Wednesday, October 26, 2016

Testing MathJax in blogger!

Added MathJax libraries to the template of my blog (thanks to the explanation available on this blog), so this is a test: $$\lim_{x \to \infty}x^2$$ I will try to reformat my previous posts to update the contents!

Tuesday, September 6, 2016

A method to visualize the underlying patterns of fractal sequences

As Wikipedia says, a fractal sequence is one that contains itself as a proper subsequence. An example is $\{1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ...\}$. If the first occurrence of each $n$ is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so the original sequence actually contains not just one copy of itself but infinitely many.

Trying to find ways to visualize the underlying patterns, I managed to convert a sequence into an elementary cellular automaton in which the first status (or step) of the automaton corresponds to the initial sequence. The next status is generated by a replacement rule: the closest element to the right of an element that is marked to be removed (following the definition of the fractal sequence) invades (or "phagocytes") that position with its value, and the invader grows in visual size. Applying this rule repeatedly and visualizing the evolution of the automaton's status over time, a fractal pattern arises.

Besides, if each element of the sequence at each status represents a binary bit (marked to be removed = $0$, not marked to be removed = $1$), it is possible to obtain a representation of the original sequence in terms of an equivalent sequence obtained by a binary manipulation of the statuses. Below is an example of the algorithm; the questions are at the end:

1. Take for instance the following fractal sequence (OEIS A000265): when the first appearance of each odd integer ($1,3,5,7,9...$) is removed, the resulting sequence is the same as the original one:

$$\{1,1,3,1,5,3,7,1,9,5,11,3,13,7,15,1,17,9,19,5,21...\}$$

2. The following image shows the first status on top, which is the initial sequence with the elements that will be removed marked in blue (the first $1$, first $3$, first $5$, etc.), and the second status below it: the elements marked in the first status were "invaded" (or "phagocyted") by the right-side elements that are not removed in the former step, so those elements increased their size. Then, again, we mark the elements that will be removed (the first $1$, first $3$, first $5$, etc.):


3. Repeating the process, we arrive at a fractal pattern like this one (e.g. in 5 steps):


4. Now, checking the vertical values of each element (column) across the statuses, we can create a unique sequence equivalent to the original fractal sequence. Starting with the leftmost column, and assuming that the first status (top) represents $2^0$, the second status represents $2^1$, and so on, and that the color gives the multiplying factor ($0$ if blue, $1$ if not blue), we obtain the sequence:

$$\{0,1,2,3,4,5,6,7,8,9...\}$$

For instance, the first column is $0 \cdot 2^0 + 0 \cdot 2^1 + 0 \cdot 2^2 + ... = 0$, the second column is $1 \cdot 2^0 + 0 \cdot 2^1 + 0 \cdot 2^2 + ... = 1$, etc.

So, by this method, the equivalent sequence of $\{1,1,3,1,5,3,7,1,9,5,11,3,13,7,15,1,17,9,19,5,21...\}$ is $\{0,1,2,3,4,5,6,7,8,9,10,11,...\}$, consistent with the column values above.
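To make the construction concrete, here is a minimal sketch (my own code, not from the original post) that runs the "invasion" rule as I read it on A000265 and computes the binary equivalent sequence from step 4; the helper names (odd_part, automaton_steps, equivalent_sequence) are mine:

    def odd_part(n):                          # A000265: the odd part of n
        while n % 2 == 0:
            n //= 2
        return n

    def automaton_steps(seq, steps):
        # Each status is a list of (value, width) cells. Cells holding the first
        # occurrence of a value are marked (blue); at the next status each marked
        # cell is absorbed ("invaded") by the cell on its right, which grows.
        cells = [(v, 1) for v in seq]
        statuses = []
        for _ in range(steps):
            seen = set()
            marked = []
            for v, w in cells:
                marked.append(v not in seen)  # True = first occurrence = removed
                seen.add(v)
            statuses.append((cells, marked))
            new_cells, pending = [], 0
            for (v, w), m in zip(cells, marked):
                if m:
                    pending += w              # width waiting to be absorbed
                else:
                    new_cells.append((v, w + pending))
                    pending = 0
            cells = new_cells
        return statuses

    def equivalent_sequence(statuses, n_columns):
        # Column value = sum over statuses s of bit * 2^s, where the bit is 0 if
        # the cell covering that column is marked at status s, and 1 otherwise.
        values = [0] * n_columns
        for s, (cells, marked) in enumerate(statuses):
            col = 0
            for (v, w), m in zip(cells, marked):
                for _ in range(w):
                    if col < n_columns and not m:
                        values[col] += 2 ** s
                    col += 1
        return values

    seq = [odd_part(n) for n in range(1, 257)]
    statuses = automaton_steps(seq, 8)
    print(equivalent_sequence(statuses, 16))  # [0, 1, 2, ..., 15]

Running it reproduces the $\{0,1,2,3,...\}$ equivalent sequence above; feeding in a different fractal sequence shows how the equivalent sequence changes.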

The equivalent sequence varies depending on the original fractal sequence; these are some examples (click to expand the image):


After thinking a little bit more about the options, this is a possible way of showing the underlying patterns: for the same example as above, OEIS A000265, each initial number of the sequence (or first status of the automaton) is represented by a circle of radius $1$ (yellow).

In the second step, the elements marked to be removed were "invaded" by the closest elements on their right side. The invader element grew; we show that growth by adding a new circle with a radius that covers both the invaded element (represented by its circle from the former step) and the invader (also represented by its circle from the former step).

That new circle is shown e.g. in red. When we repeat the algorithm, or in other words, when we continue evolving the automaton shown in the question for some more steps, the pattern finally starts to arise:


I have asked at MSE if there are other methods to show the underlying patterns of the fractal sequences.

Tuesday, July 12, 2016

Euler's "Lucky" constant

Based on my results in the previous post regarding a prime-representing function based on Bertrand's postulate, I was able to produce a new constant related to Euler's "Lucky" numbers. Initially I did not find this constant in any bibliography or reference, and it seems a nice use of a Mills-like constant. So here are the results:

Euler's "Lucky" constant $E = 2.893392257682316134127494663$

This constant is such that $\lfloor E^{2^n}\rfloor-\left(\lfloor E^{2^{n-1}}\rfloor\right)^2+\frac{\lvert n-\frac{1}{2} \rvert}{\frac{1}{2}-n}$ for $n=0..5$ provides the sequence of Euler's "Lucky" prime numbers in increasing order.


Euler's lucky constant E is a Mills-like constant obtained by encapsulating Euler's "Lucky" numbers into integers E(n), each lying in the interval [N^2, (N+1)^2] defined by the previous term N = E(n-1). This provides a representing function of Euler's "Lucky" primes for n = 0..5.

The elements E(n), each of which plays the role of N for the interval [N^2, (N+1)^2] containing the next element, are calculated as follows:

E(0) = 2
E(1) = 3 + 1 + E(0)^2 = 8
E(2) = 5 + 1 + E(1)^2 = 70
E(3) = 11 + 1 + E(2)^2 = 4912
E(4) = 17 + 1 + E(3)^2 = 24127762
E(5) = 41 + 1 + E(4)^2 = 582148899128686

And finally Euler's "Lucky" constant (E) has the following value:

E=E(5)^(1/(2^5))=2.893392257682316134127494663
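As a quick numeric check, here is a minimal sketch (my own code, not part of the original post) of this construction, assuming the six Euler "Lucky" numbers 2, 3, 5, 11, 17, 41 as input:

    from decimal import Decimal, getcontext

    getcontext().prec = 50
    lucky = [2, 3, 5, 11, 17, 41]

    E = [Decimal(2)]                              # E(0) = 2
    for k in range(1, 6):
        E.append(lucky[k] + 1 + E[k - 1] ** 2)    # E(n) = lucky(n) + 1 + E(n-1)^2

    constant = E[5] ** (Decimal(1) / Decimal(2 ** 5))
    print([int(e) for e in E])   # [2, 8, 70, 4912, 24127762, 582148899128686]
    print(constant)              # 2.8933922576823161341274946...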





The above manipulation encapsulates the "Lucky" primes into a different, higher interval, so the following calculation returns the "Lucky" primes back from the representing function:

Lucky(n+1)=floor(E^(2^n))-(floor(E^(2^(n-1))))^2+(abs(n-(1/2))/((1/2)-n))
for n=0..5

Thus:

Lucky(1)=floor(E^(2^0))-(floor(E^(2^(0-1))))^2+(abs(0-(1/2))/((1/2)-0))=2

Lucky(2)=floor(E^(2^1))-(floor(E^(2^(1-1))))^2+(abs(1-(1/2))/((1/2)-1))=3

Lucky(3)=floor(E^(2^2))-(floor(E^(2^(2-1))))^2+(abs(2-(1/2))/((1/2)-2))=5

Lucky(4)=floor(E^(2^3))-(floor(E^(2^(3-1))))^2+(abs(3-(1/2))/((1/2)-3))=11

Lucky(5)=floor(E^(2^4))-(floor(E^(2^(4-1))))^2+(abs(4-(1/2))/((1/2)-4))=17

Lucky(6)=floor(E^(2^5))-(floor(E^(2^(5-1))))^2+(abs(5-(1/2))/((1/2)-5))=41

Q.E.D.


The term (abs(n-(1/2))/((1/2)-n)) is just a correction that handles the initial case Lucky(1), associated with n=0, so that it can be joined with the rest of the cases: E(0) is a special case because there is no previous term. The term equals +1 for n=0 and -1 for all other n.
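The recovery formula can be checked numerically; below is a minimal sketch of mine using the quoted decimals of E. One caveat I should flag: E^(2^5) is an exact integer by construction, so the n = 5 case needs all the quoted digits of E, and if it ever comes out one unit too low, the constant simply needs a few more correct decimals.

    from decimal import Decimal, getcontext

    getcontext().prec = 60
    E = Decimal("2.893392257682316134127494663")

    def lucky(n):
        correction = 1 if n == 0 else -1    # abs(n - 1/2) / ((1/2) - n)
        return int(E ** (2 ** n)) - int(E ** (Decimal(2) ** (n - 1))) ** 2 + correction

    print([lucky(n) for n in range(6)])     # expected: [2, 3, 5, 11, 17, 41]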

Finally, the following polynomial in two variables provides the complete set of Euler primes (A196230) for n = 0..5 and x = 1..(floor(E^(2^n))-(floor(E^(2^(n-1))))^2+(abs(n-(1/2))/((1/2)-n)))-1:

x^2-x+[floor(E^(2^n))-(floor(E^(2^(n-1))))^2+(abs(n-(1/2))/((1/2)-n))] 
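Equivalently, writing k = Lucky(n+1) for the value recovered from E, the polynomial is x^2 - x + k evaluated at x = 1..k-1. Here is a small sketch of mine that plugs in the recovered "Lucky" numbers directly and checks primality with sympy:

    from sympy import isprime

    for k in [2, 3, 5, 11, 17, 41]:             # the values recovered from E above
        values = [x * x - x + k for x in range(1, k)]
        assert all(isprime(v) for v in values)  # every value is prime
        print(k, values[:5], "...")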

I think this is a very nice application of a Mills-like constant: the whole set of Euler's "Lucky" numbers can be obtained from a single expression.

Thursday, July 7, 2016

A prime-representing function based on Bertrand's postulate and Mills' constant

P.S. This is a backup of my question at MSE.

Wikipedia explains that, in number theory, Mills' constant is defined as:

"The smallest positive real number A such that the floor function of the double exponential function floor(A^(3^n)) is a prime number for all natural numbers n. This constant is named after William H. Mills who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unknown, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... (sequence A051021 in OEIS)."

In other words, f(n) = floor(A^(3^n)) is said to be a prime-representing function, because f(n) is a prime number for all positive integers n.

The proof given by Mills in 1947 (pdf here) is very easy to follow (one must be careful about the already known typos in the paper to follow the explanation properly).

It is also known that floor(Q^(k^n)) always works if k is at least 3, so there are other constants, not yet defined, that will work whenever k is greater than or equal to 3. It is also known that k=2 might not work because it depends on Legendre's conjecture (is there always a prime between N^2 and (N+1)^2?), which is thought to be extremely difficult.

Based on the original proof, and using Bertrand's postulate as the key for some manipulations, I was able to find a constant L such that floor(L^(2^n)) provides a sequence of integers (the difference is that they are not primes directly) from which an associated sequence of primes can be obtained. Here is how it works:

According to Bertrand's postulate for a given prime p_i it holds:

p_i < p_j < 2p_i for some existing prime p_j

Now adding  +p_i^2+1:

p_i+p_i^2+1 < p_j+p_i^2+1 < 2p_i+p_i^2+1

Since p_i^2 < p_i+p_i^2+1 and 2p_i+p_i^2+1 = (p_i+1)^2, this implies:

p_i^2 < p_j+p_i^2+1 < (p_i+1)^2

Calling e_j = p_j+p_i^2+1, it is possible to build a sequence of integers E_0, E_1, ..., E_n such that:

E_0=P_0=2

E_0^2 < E_1 < (E_0+1)^2

E_1^2 < E_2 < (E_1+1)^2

...

E_(n-1)^2 < E_n < (E_(n-1)+1)^2
...

The same conditions shown in Mills' proof hold for this sequence, in this case with exponent 2 instead of 3. This is possible because the sequence is not directly a sequence of primes, but a sequence of integers attached to primes through Bertrand's postulate, via the manipulation:

P_n = E_n - (E_(n-1))^2 - 1

And, in the same fashion as Mills' constant:

E_n=floor(L^(2^(n)))

Where L is obtained after some manual calculations (up to n=11) as

L_11=1.71979778440410781908547548439826963...

The way to calculate the constant manually is as follows. In the same fashion as in Mills' proof, the relationship between L_n (the constant calculated after knowing the nth element E_n of the sequence) and E_n is:

L_n=E_n^(1/(2^n))

These are the calculations for the first 5 elements in Excel (the accuracy of the decimals of L is not so good, but it works properly).

L grows slightly on every iteration, but the growth gets smaller each time, so it appears to tend to a fixed value as n tends to infinity.

(graph here)

For instance, for n=1..5, floor(L^(2^n)) is:

E_1=2

E_2=8 and P_1=E_2-(E_1)^2-1=8-2^2-1=3

E_3=76 and P_2=E_3-(E_2)^2-1=76-8^2-1=76-64-1=11

E_4=5856 and P_3=E_4-(E_3)^2-1=5856-76^2-1=5856-5776-1=79

E_5=34298594 and P_4=E_5-(E_4)^2-1=34298594-5856^2-1=5857

etc.
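These values are easy to reproduce; here is a short sketch of mine that rebuilds E_1..E_5 with sympy's nextprime (Bertrand's postulate guarantees the prime it returns lies below 2*E_n) and recovers the primes again:

    from sympy import nextprime

    E = [2]                                   # E_1 = 2
    for _ in range(4):
        p = nextprime(E[-1])                  # prime in (E_n, 2*E_n) by Bertrand's postulate
        E.append(p + E[-1] ** 2 + 1)          # E_(n+1) = p + (E_n)^2 + 1
    print(E)                                  # [2, 8, 76, 5856, 34298594]
    print([E[i] - E[i - 1] ** 2 - 1 for i in range(1, 5)])   # [3, 11, 79, 5857]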

So in essence the prime-representing function would be:

floor(L^(2^n)) - (floor(L^(2^(n-1))))^2 - 1 for n>=2

where the starting element of the sequence is E_1=2 and L is the value of L_n as n tends to infinity.

This is the Python code to calculate and test L:

def mb():
    from sympy import nextprime
    import matplotlib.pyplot as plt
    import decimal
   
    print("L constant test. Equivalent to Mill's constant applying Bertrand's postulate")
    print("----------------------------------------------------------------------------")
    print()
   
    # Decimal precision is required, change according to test_limit
    decimal.getcontext().prec = 2000
    # List of elements E_n generated by L^(2^(n))
    E_n = []
    # List of progressive calculations of L_n, the last one of the list is the most accurate
    # capable of calculate all the associated primes
    L_n=[]
    # List of primes obtained as L^(2^(n)) - (E_(n-1))^2 - 1, associated with Bertrand's postulate
    P_n=[]
    # depth of L: L will be able to calculate the primes L^(2^(n)) - (E_(n-1))^2 - 1 for n=1 to test_limit-1
    test_limit=12
    for n in range(1,test_limit):
        if n==1:
            # E_1=2
            E_n.append(2)
        else:
            # E_n = (E_(n-1))^2 + P_(n-1) + 1
            # Be aware that the Python list starts in index 0 and the last current element is index n-1:
            # that is why n-2 appears in the calculation below
            E_n.append((E_n[n-2]**(decimal.Decimal(2)))+P_n[n-2]+1)
        # Next prime greater than E_n: it will be in the interval (E_n, 2*E_n) (Bertrand's postulate)
        P_n.append(nextprime(E_n[n-1]))
        # Calculation of L_n
        L_n.append(E_n[n-1]**(decimal.Decimal(1)/(decimal.Decimal(2)**(decimal.Decimal(n)))))
   
    print("List of Elements of L^(2^(n)) for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    for i in E_n:
        mystr=mystr+str(i)+"\t"
    print(mystr)
   
    print()
    print("List of Primes obtained from L constant: L^(2^(n)) - (E_(n-1))^2 - 1 for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    for i in P_n:
        mystr=mystr+str(i)+"\t"
    print(mystr)
   
    print()
    print("List of calculations of L_n (the most accurate one is the last one) for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    ax = plt.gca()
    ax.set_facecolor((0, 0, 0))  # set_axis_bgcolor was removed in recent matplotlib versions
    figure = plt.gcf()
    figure.set_size_inches(18, 16)
    n=1
    for i in L_n:
        mystr=mystr+str(i)+"\t"
        plt.plot(n,i,"w*")
        n=n+1
    print(mystr)
   
    # Print the graph of the evolution of L_n
    # It appears to tend to one specific value as n tends to infinity
    plt.show()
   
    #Testing the constant
    print()
    print("L Accuracy Test: using the most accurate value of L will calculate the primes associated")
    print("to the constant and will compare them to the original primes used to create the constant.")
    print("If they are the same ones, the constant is able to regenerate them and the theoretical background")
    print("applied would be correct.")
    #Using the most accurate value of L available
    L = L_n[len(L_n)-1]
    print()
    print("The most accurate value of L is:")
    print(L)
    print()
    for i in range(1,test_limit):
        print()
        tester = int(L**(decimal.Decimal(2)**(decimal.Decimal(i))))
        if i==1:
            tester_prime = 2
        else:
            tester_prime = decimal.Decimal(tester) - (E_n[i-2]*E_n[i-2]) - 1
        print("Current calculated E:\t" + str(tester) + " for n = " + str(i))
        print("Original value of E:\t" +str(E_n[i-1]) +  " for n = " + str(i))
        if tester == E_n[i-1]:
            print("Current calculated prime:\t" + str(tester_prime) +  " for n = " + str(i))
            if i==1:
                print("Original value of prime:\t2 for n = " + str(i))
            else:
                print("Original value of prime:\t" + str(P_n[i-2]) + " for n = " + str(i))
        else:
            # If we arrive to this point, then the constant and the theory would not be correct
            print("ERROR L does not generate the correct original E")
            return
    print()
    print("TEST FINISHED CORRECTLY: L properly generates the primes associated with Bertrand's postulate")
mb()


Initially, L=1.71979778... does not seem to have been defined before, but who knows; at least I was not able to find such a constant in the Mills-constant-related papers I could find on the Internet. I find it interesting. I am trying to calculate a more accurate value of the constant (I will try to update this post as soon as possible). It yields only prime numbers as long as the precision is sufficient (in the same way as Mills' constant).

The advantages of L versus A (the original Mills' constant) are:

1. The growth rate of the double exponential is lower due to the use of powers of 2 instead of powers of 3 or more, so for the same number of values of n, L provides smaller primes than A in less time.

2. The use of L^(2^(n)) does not require the validation of Legendre's conjecture because it depends on Bertrand's postulate, so the theory holds.

I have asked a question at MSE about these calculations. According to the feedback, it initially seems that the manipulations are correct and could open an alternative way of calculating prime-representing functions with a lower rate of growth. I am wrapping up the ideas and preparing a paper. I will try to update this post as soon as I finish it.

Wednesday, April 27, 2016

Modelling a Subset problem: An interesting sequence obtained from an orbits simulator (III)

This is the third part of my previous posts on the topic "an interesting sequence obtained from an orbits simulator" (first part and second part).

Several months have passed and I still do not know very much about the properties of my initial subset problem, so the first thing I am trying to do is to write everything more properly: instead of talking about an example (an orbits simulator), I will try to explain the algorithm mathematically. Here is what I have been able to do so far:

It seems that my original problem could fit into this area:

Partition problem > multi-way partition problem

Which is defined as (Wikipedia) "the optimization version of the partition problem. Here, the goal is to divide a set or multiset of n integers into a given number k of subsets, minimizing the difference between the smallest and the largest subset sums."

For that reason I am using as a lead paper to gather ideas: Korf, Richard E. (2009). "Multi-Way Number Partitioning" (PDF). IJCAI.

...but I am not really sure whether my problem can be included in that kind of generic problem, because instead of using a set of integers I am using a FIFO (first-in, first-out) pile of integers F, and the subsets are created "on the fly", meaning that only the head element can be taken when running the algorithm that creates the partitions and populates the generated subsets. Here is the description of the problem / algorithm:

1. Initially there is a FIFO set F and an empty list S of subsets. Every time a subset is generated it gets an index i, for instance S_i: the first subset will be S_0, then S_1, etc. The main rule is that at every moment the sum of the elements of any S_j must be smaller than or equal to the sum of the elements of any S_i with index i smaller than j.

2. Take the head element of the FIFO set and

a) If the new element is smaller than or equal to the sum of the elements of the subset with the greatest index, $S_k$, then create a new subset $S_{k+1}$ and insert the element there.

b) If not, then insert the element into the existing subset S_j with the greatest possible index j such that, after the element is added, the sum of S_j is still smaller than or equal to the sum of any subset S_i with index i lower than j. If no subset with index greater than 0 complies with the rule, the element is inserted into S_0 (and if S_0 does not exist yet, it is created first and then the element is inserted).

3. Repeat from step 2 until the FIFO set is empty.

Example with a FIFO set F = N^{+}, containing the natural numbers excluding 0 in strictly increasing order, {1,2,3,4,5...}:

1. Take the head element, 1, and look for the existing subset in S with the greatest index. As S is still empty, create one (in this case S_0) and add the element: S_0={1}.

2. Take the head element, 2, and go to the subset with the greatest index, in this case S_0. As 2 is greater than the sum of the elements of that subset (1), rule a) does not apply, so applying b), 2 is inserted into S_0.

3. Take the head element, 3, and go to the existing subset with the greatest index (in this case there is only S_0). a) As the new element 3 is smaller than or equal to the sum of the elements of S_0 (3), it is possible to create a new subset, so the element is added to S_1: S_1 = {3}.

So far we have two subsets, S={S_0,S_1}, and the sum of S_j <= the sum of S_i for all i<j.

4. Repeating the process for the next head element of the pile, 4: the sum of S_1 is 3, which is not >= 4, so we cannot create a new subset. We then try to add 4 to S_1, but if we did, the sum of S_0 (3) would no longer be greater than or equal to the sum of S_1 (which would become 3+4=7), so we cannot add 4 to S_1. We go down one level and try S_0; as there are no subsets below S_0, we add 4 to S_0.

So far the subsets are:

S={S_0={1,2,4}, S_1={3}}

After some repetitions of this algorithm, the subsets look like this (this image was already shown in the first post I did about the topic):




There are two interesting subsets there: S_0, and the set of first elements of each S_i, which I will call F.

S_0={1,2,4,5,8,12,17,19...}. S_0 will always be the subset with the greatest value of the sum of its elements.

F={1,3,7,11,16,25...}. The elements of this subset are those elements of the initial FIFO set capable of forcing the generation of a new subset.

I tried to find them at OEIS, but unluckily they are not included.
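For reference, here is a minimal sketch (my own code, following my reading of the rules above) that feeds the positive integers into the algorithm and prints S_0 and the sequence of first elements:

    def build_subsets(elements):
        subsets = []                                       # S_0, S_1, ... as lists
        for x in elements:
            if subsets and x <= sum(subsets[-1]):          # rule a): open a new subset
                subsets.append([x])
                continue
            if not subsets:                                # very first element: create S_0
                subsets.append([x])
                continue
            # rule b): greatest index j such that adding x keeps sum(S_j) <= sum(S_i) for i < j
            for j in range(len(subsets) - 1, -1, -1):
                if all(sum(subsets[j]) + x <= sum(subsets[i]) for i in range(j)):
                    subsets[j].append(x)
                    break
        return subsets

    subsets = build_subsets(range(1, 61))
    print(subsets[0])                  # starts with 1, 2, 4, 5, 8, 12, 17, 19, ...
    print([s[0] for s in subsets])     # starts with 1, 3, 7, 11, 16, 25, ...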

When I thought about this problem for the first time (in the first post mentioned above), I was thinking about an application of this algorithm as a model for energy levels in different orbits. Imagine a "nucleus" that can have several orbits around it; initially there are no orbits. Then a chunk of energy 1 arrives and is located in the lowest orbit, equivalent to S_0. Then another chunk of energy 2 arrives, but it cannot be located in a second orbit or energy level because the energy of a higher level cannot be greater than the energy in the lower levels, so it stays at level S_0. But when the next chunk, 3, arrives, it can be located in a new energy level because there is at least the same quantity of energy in the lower levels.

With this algorithm, the lower levels (the levels closer to the "nucleus") always hold at least as much energy as the outer levels. Saying the same thing in terms of the mathematical model: the previously generated subsets will always have a sum greater than or equal to the sum of any later generated subset.

Basically I am lost about how to catalog this (and then extend it, study its properties, etc.), so I have also asked a question regarding this topic at MSE (link here).

Monday, April 11, 2016

Can we replace the upper limit condition of the Sieve of Eratosthenes, sqrt(n), with the value sqrt(p), where p is the closest prime < n?

By chance I stumbled upon the OEIS list A033677, the smallest divisor of n greater than or equal to sqrt(n). Roughly speaking, when we use the classic enhanced sieve of Eratosthenes, sqrt(n) is the typical upper limit to find a divisor d>1 of n: if n is composite, then there is expected to exist a divisor d in [2,sqrt(n)].

(*This is a mirror of my same question at MSE.)

I did the same exercise, but instead of sqrt(n) I used as the limit the square root of the closest prime p below n, and calculated the smallest divisor of n greater than or equal to sqrt(p).

Testing with Python, and if I did not make an error, I curiously obtained exactly the same list as A033677 for every n in [3, 6*10^6] (after that point it gets slower; I will try with PARI/GP later). So I wonder if it is possible to use the above-mentioned sqrt(p) upper limit instead of sqrt(n) to find a divisor of n when sieving.

This is the (very basic) Python code that compares, one by one, the elements of the sequence A033677 with the sequence of the smallest divisor of n greater than or equal to sqrt(p); please feel free to use it:

    from math import sqrt
    from sympy import prevprime, divisors

    def compare_A033677():
        for n in range(3, 10000000):
            # smallest divisor of n greater than or equal to sqrt(n), i.e. A033677(n)
            A033677_value = 0
            for d in divisors(n):
                if d >= sqrt(n):
                    A033677_value = d
                    break
            # smallest divisor of n greater than floor(sqrt(p)), p = closest prime below n
            for d in divisors(n):
                if d > int(sqrt(prevprime(n))):
                    if A033677_value == d:
                        print("SUCCESS " + str(n))
                    else:
                        print("ERROR " + str(n) + "\t" + str(A033677_value) + "\t" + str(prevprime(n)) + "\t" + str(d))
                        return
                    break

    compare_A033677()


If the observation is true, the property would be something like this:

for every composite n there exists d such that d | n and d is in [2, sqrt(p1)], where p1 is prime, p1 < n, and there is no prime p2 with p1 < p2 < n.

For instance, for n=20,21,22,23 we would look for a divisor of n in the interval [2,sqrt(19)], because 19 is the closest prime below those n. We already sieved 19 in a former step, so it is in our list of already sieved primes and thus available to be used (meaning we already know at this point that 19 is prime). Then, applying Eratosthenes as usual, it would be detected that n=20,21,22 are composite and n=23 is the next prime.

In some cases (e.g. when n is a perfect square), even though the smallest divisor over sqrt(n) and the one over sqrt(p) coincide, there is no divisor in the interval [2, floor(sqrt(p))], and it is required to go up to [2, floor(sqrt(p))+1], meaning that the first divisor is exactly floor(sqrt(p))+1.
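A quick illustration of this edge case (a small sketch of mine, using squares of primes, where the only non-trivial divisor is the square root itself):

    from math import isqrt
    from sympy import prevprime, divisors

    for n in [4, 9, 25, 49, 121, 169]:             # squares of primes
        p = prevprime(n)
        d = min(x for x in divisors(n) if x > isqrt(p))
        print(n, p, isqrt(p), d)                   # d comes out as floor(sqrt(p)) + 1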

If true, I am not sure whether it would be significant. By Bertrand's postulate there should be a prime in [n/2, n], so in the best of cases the previous prime would be close to n/2, which would imply, for that best case, using sqrt(n/2) as the upper limit for the sieve instead of sqrt(n).

So I wonder: would it be possible to use sqrt(p) instead of sqrt(n)? Was that approach already in use? Is there a counterexample (probably a very big one)? If not, what could be the reason behind it?

The code below tests the Sieve of Eratosthenes using the modified algorithm as explained above, including the special condition for perfect squares. It seems to work so far up to 10^5.
    from math import sqrt
    from gmpy2 import is_prime, is_square

    def modified_sieve():
        # modified Eratosthenes: upper trial-division limit sqrt(p),
        # where p is the last prime sieved (the closest prime below pos)
        lopM = [2]   # list of already sieved primes
        lp = 2       # last prime sieved
        for pos in range(3, 1000008):
            ul = int(sqrt(lp))   # upper limit
            if is_square(pos):
                ul = ul + 1      # correction for perfect squares
            if ul == 1:
                ul = 2           # exclude 1 as a possible divisor
            comp = False         # composite detected
            for j in lopM:       # trial division by the already known primes
                if j > ul:       # we arrived at the upper limit
                    break
                if (pos % j) == 0:
                    comp = True
                    break
            if comp == False:
                lopM.append(pos)
                lp = pos

        for elem in lopM:
            if not is_prime(elem):   # sanity check
                print("ERROR, not prime " + str(elem))
                return
        print("Primes sieved: " + str(len(lopM)) + " and last prime was " + str(lp))

    modified_sieve()


So heuristically it seems correct, but is there a demonstration or a counterexample of it?

Wednesday, January 13, 2016

Some heuristics and conjectures about the Pisano Period, primes and Fibonacci primes.

This week I did some tests regarding the Pisano period (and asked a question at MSE), and I found the following four conditions to be true for the first 10,000 values of pi(n) (the Pisano period of n).


I do not understand the reasons for the results: points 1-3 would work as a primality test, but they do not detect all primes, only a subset of them; e.g. {2,5,47,107,113,139,...} do not comply with points 1-3 and are not detected. And especially the last point, if the test is correct, would mean that the Pisano period of a Fibonacci prime is exactly four times the index of that Fibonacci prime in the Fibonacci sequence when the index is greater than 5 (being F5=5). For instance: pi(1597)=68 and 68/4=17, which is exactly the index of 1597 in the Fibonacci sequence, F(17)=1597.
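The Pisano period is easy to compute by brute force, so the example above can be reproduced directly; a minimal sketch of mine:

    def pisano(n):
        # length of the Fibonacci sequence modulo n before the pair (0, 1) repeats
        a, b, period = 0, 1, 0
        while True:
            a, b = b, (a + b) % n
            period += 1
            if (a, b) == (0, 1):
                return period

    print(pisano(1597), pisano(1597) // 4)   # 68 17, and F(17) = 1597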

According to the answer and comments to the question at MSE, it is not yet clear whether it is true or not (there could be big pseudoprime counterexamples). After asking the question, I noticed that the first observation (1) was already registered at the OEIS sequence in the "formula" section. That does not mean it is correct, but at least nobody has removed that information since 2002, so it might be significant. My other observations are not registered at OEIS, though.

Wednesday, January 6, 2016

How to solve simple Chinese Remainder Theorem problems when there are fractions in the congruence equations

Lately I have been trying to learn how to use the Chinese Remainder Theorem (CRT), and it usually gets quite complicated because fractions sometimes appear in the congruence equations. I was looking for information at MSE (credits go to all the nice people answering questions there!) and finally I was able to solve a simple case of CRT including fractions! These are my personal notes, written down so I do not forget them; there are probably some gaps, so be careful if you use them. I think the explanation is correct and at least it should be able to guide beginners like me through this kind of problem.


Generic Problem: 1/B = C (mod D) where I want to know the value of  C and I know B and D. Constraint: B must be coprime to D.

We will use these equivalences:

Bx+1 = 0 (mod Bx+1)
1 = -Bx (mod Bx+1)
(a) 1/B = -x = -x+(Bx+1) = (B-1)x+1 (mod Bx+1)

Bx+1=D, then x=(D-1)/B, replacing x at (a) we have:

(b) 1/B = ((B-1)(D-1)/B)+1 (mod D)

Example:
1/2 = ? (mod 7)
2 is coprime to 7
2x+1=0 (mod 2x+1)
1=-2x (mod 2x+1)
1/2=-x=-x+2x+1=x+1 (mod 2x+1)
2x+1=7, thus x=3, then:
1/2 = 4 (mod 7)
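Formula (b) can be checked against Python's built-in modular inverse, pow(B, -1, D) (available since Python 3.8). A small sketch of mine; note that reading (b) as an integer formula requires B to divide D-1, which is exactly the D = Bx+1 assumption used in the derivation:

    for B, D in [(2, 7), (3, 7), (4, 13), (5, 11)]:    # cases with D = Bx + 1
        via_formula = (B - 1) * (D - 1) // B + 1       # formula (b)
        via_inverse = pow(B, -1, D)                    # 1/B mod D
        print(B, D, via_formula, via_inverse)          # the last two columns agree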

Generic Problem: A/B = C (mod D) where I want to know the value of  C and I know A,B and D. Constraint: A!=1 and B must be coprime to D.

A/B = C (mod D)

This is equivalent to solving the following pair of congruences:

(a) Ax = C (mod D)
(b) Bx = 1 (mod D) 

First solve (b), Bx = 1 (mod D), by a special case of the Euclidean Algorithm (originally from Gauss), which I will call GEA:

-------------------------
Algorithm GEA: we want to find the values x with Bx = R (mod D), knowing B, R and D:

(1) Convert R/B into NR/NB, for the least N such that NB >= D
(2) Reduce both terms of the fraction mod D and then simplify the fraction if possible
(3) If the denominator is 1, finish; if not, repeat from (1)
(4) The final numerator, N, is the solution, so:

x = N (mod D)

That implies:

x=Dk+N for a certain k
-------------------------
We have solved (b), so now replace x = Dk+N in (a):

(c) A(Dk+N) = C (mod D)

Now we can simplify the expression to obtain the value of C (the terms containing k are multiples of D, so they vanish mod D).

Example:

6/5 = ? (mod 7)

5 is coprime with 7

is equivalent to:

(a) 6x=? (mod 7)
(b) 5x=1 (mod 7)

Solve (b)

GEA: 1/5 = 2/10 = 2/3 = 6/9 = 6/2 = 3 , thus:

(b) x=3 (mod 7)

meaning:

x=7k+3

Replacing in (a)

(a) 6(7k+3) (mod 7) = 42k+18 (mod 7) = 0+4 (mod 7)  = 4 (mod 7)

meaning that:

6/5 = 4 (mod 7)
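The same result can be cross-checked in one line with Python's modular inverse (pow with a negative exponent, Python 3.8+), which plays the role of solving (b):

    A, B, D = 6, 5, 7
    x = pow(B, -1, D)            # solves 5x = 1 (mod 7), giving x = 3
    print(x, (A * x) % D)        # 3 4  ->  6/5 = 4 (mod 7), as above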

As I said these are my personal notes, please use them carefully!