RSALS now stopped
RSALS is now closed. All operations have been transferred to NFS@Home at:
Now, please join NFS@Home as soon as possible so that you can continue factoring numbers. Lionel Debroux will continue to manage the numbers.
See the new crunching.php page at:
Only 4 numbers are being crunched at the moment, so please come in and increase the new grid's power!
This website, showing users and credits, will stay here forever. Only the BOINC programs are stopped.
Thank you all for working with us.
squalyl 18 Sep 2012 | 19:18:13 UTC
RSALS shutting down at the end of August, please migrate to NFS@Home...
As announced 11 days ago, RSALS operations are being merged into the larger NFS@Home grid.
The initial tests have been very positive, so the migration will now proceed as intended. NFS@Home has a newer and better version of the same siever program (derived and improved from the original RSALS program). The same people will remain involved in providing numbers, using the results, and feeding the BOINC server.
We will stop feeding WUs into RSALS at the end of this month. You can finish your current work, and even take a bit more for the time being, but please schedule the migration of your BOINC clients to NFS@Home. You don't need to wait; you can migrate now.
Unlike RSALS, NFS@Home appears in the BOINC Manager's list of projects.
Our announcement seems to have sparked clients' interest in RSALS, as the power of RSALS rose from an indicated ~250-350 GFLOPS (on the status page) to more than 1300 GFLOPS at the time of this writing. Thanks for your interest, which helps multiple integer factoring projects, and thanks to our volunteer post-processors for helping us :-)
If you don't want the NFS@Home WUs to use more RAM than the current RSALS WUs do, you'll have to make sure, in the preferences of your account on NFS@Home ( http://escatter11.fullerton.edu/nfs/prefs.php?subset=project ), that "lasieved" is the only enabled siever.
If you want to make your computers work on harder numbers, then you can enable the "lasievee", "lasievef" and "lasieve5f" sievers, in increasing order of memory requirements. lasieve5f can require more than 1 GB of RAM per core.
We hope to see you on NFS@Home soon :-)
Most of the BOINC server programs will be stopped, but the current RSALS web pages will stay here and will continue to show your contributions to this great project.
Lionel Debroux & squalyl for RSALS. 16 Aug 2012 | 5:29:19 UTC
RSALS moving to NFS@Home and shutting down...
After nearly three years of work, at first aimed at factoring the 512-bit RSA keys used for validation in TI-Z80 and TI-68k graphing calculators, but soon repurposed for factoring integers of mathematical interest, RSALS is currently being moved to the larger NFS@Home grid. It will be shut down in the next few weeks, once the current numbers (and perhaps a couple of easy ones, to pick up the slack) are finished.
Clients connected to RSALS have participated in the factoring of about 400 fairly large composite integers, helping a number of projects interested in those factorizations. Thanks so much for your trillions of CPU cycles over those three years :-)
RSALS was the first BOINC grid to use the Number Field Sieve, the most efficient known algorithm for factoring large integers. The NFS@Home grid was created shortly after RSALS by Greg Childers, a.k.a. "frmky", Associate Professor of Physics at California State University, Fullerton, who is well known in the integer factoring community. NFS@Home aims at factorizations larger than RSALS could reach with the single "14e" siever it used, by also using the larger "15e", "16e" and other sievers.
For almost three years, RSALS and NFS@Home were used in a complementary way, but the time has come to make them a single, more powerful grid with a single set of programs, rather than spending time migrating RSALS to a newer server (the current one being expensive and under-powered) and importing NFS@Home's sievers into RSALS.
Thanks again to our BOINC clients, our post-processers, and the integer factoring community. We hope to see you on NFS@Home soon :-)
NOTE: if you really don't want the NFS@Home WUs to use more RAM than the current RSALS WUs do, you'll have to make sure, in the preferences of your account on NFS@Home ( http://escatter11.fullerton.edu/nfs/prefs.php?subset=project ), that "lasieved" is the only enabled siever.
For a more detailed version of this post, see the MersenneForum announcement.
Lionel Debroux & squalyl for RSALS. 4 Aug 2012 | 7:26:38 UTC
Yesterday, the server ran out of disk space because I forgot to clean up some results whose factorization is complete.
To our dismay, corruption of the SQL database ensued, but oddly, several hours after the "disk full" condition had been cleared by deleting 20+ GB of obsolete data. That's how we learned, the hard way, how badly MySQL behaves under ENOSPC conditions...
squalyl has restored the SQL database from a complete dump made last weekend. I'll queue up new numbers tonight and handle the status of the in-between numbers (probably switching them to manual generation, as I did for dozens of numbers before squalyl wrote the automated work generation system).
Thanks to advice from Greg Childers (NFS@Home), WU priorities now work, so the "older" numbers will be distributed first.
Sorry for the inconvenience, and please bear with us; hopefully, the partial downtime will be short :)
debrouxl. 16 Nov 2011 | 9:57:01 UTC
<p>The server_status page was upgraded with some nice RRDtool graphs showing the current number of ready and in-progress results.</p>
<p><a href="http://boinc.unsads.com/rsals/server_status.php">Server status</a></p>
<p>Now planning to graph the server disk usage.</p>
<p>squalyl</p> 5 Nov 2010 | 13:46:28 UTC
The server is currently having a lot of uptime problems; sorry for that. This is unrelated to the planned upgrade.<br>
I don't know the cause yet; I'm working with the provider to find it.<br>
Thanks for your patience.<br>
squalyl 10 Jul 2010 | 8:25:00 UTC
The RSALS server hardware will be upgraded in the upcoming days.<br>
No action will be required from you. The server will simply appear to be down for a few hours, then come back up.<br>
This message is here to tell you that this downtime is normal and under control.<br>
RSALS is NOT gone :)<br>
For those of you who are curious, the upgrade will happen in 4 phases.<br>
<ul><li>The new server will be set up, including Linux, Apache, MySQL, the BOINC server software, etc., with a copy of the DB at that point. We will check for problems, including those related to a possible new DB structure, upgraded web pages and forums, etc. We will then check that a BOINC client is able to crunch test workunits from this server. This step will not impact your participation in the project, since the old server will still be operating at that point.</li>
<li>Once we are sure that the new server seems to operate correctly, the current server will be stopped. At that point, you will not be able to fetch and push work. The latest snapshot of the database will then be migrated to the new server.</li>
<li>The boinc.unsads.com DNS records will be updated, which will take some hours to propagate to all regions of the world.</li>
<li>Once DNS replication seems to be OK in some major countries, the new server will be started, and you will then be able to resume your participation. A few hours later, everything will be back to normal.</li></ul>
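The DNS replication check in the last step can be done mechanically by comparing what a resolver returns against the new server's address. A minimal sketch in Python (the hostname is the project's real one, but the expected IP is a placeholder you would substitute, not the actual server address):

```python
import socket

def dns_points_to(hostname: str, expected_ip: str) -> bool:
    """Return True if the local resolver already maps hostname to expected_ip."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        return False  # name not resolvable from here yet

# Example (the IP below is a placeholder, not the real new server address):
# dns_points_to("boinc.unsads.com", "203.0.113.10")
```

Running this from machines in different regions gives a rough picture of how far the DNS change has replicated.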
We will keep you updated through the project global emailing feature so that we're sure that you get the right information on time. We'll send you notices at the beginning and the completion of each step.<br>
Feel free to comment or ask questions; we will reply ASAP on the progress.<br>
squalyl 1 Jul 2010 | 7:49:12 UTC
Automatic work generation
Today an experimental work generator was set up, and it seems to work fine. The generator is a PHP script run from cron that tries to keep a set of unsent workunits ready for the clients. Work generation stops when a predefined threshold is reached; this is a sort of feedback control :p .<br>
New numbers are now managed automatically, while old ones are still managed by hand.<br>
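The threshold feedback described above can be sketched as follows. This is an illustrative Python model of the cron job's logic, not the actual PHP script, and all function names are hypothetical:

```python
UNSENT_TARGET = 500  # illustrative threshold: desired pool of unsent workunits

def top_up_work(count_unsent, create_workunit, next_input):
    """One cron pass: create workunits until the unsent pool reaches the target.

    count_unsent()        -> number of unsent WUs currently in the database
    next_input()          -> next input file contents, or None when the queue is empty
    create_workunit(data) -> registers one new WU with the BOINC server
    (all three are hypothetical stand-ins for the real project hooks)
    """
    created = 0
    while count_unsent() + created < UNSENT_TARGET:
        data = next_input()
        if data is None:  # nothing left to sieve: stop until new numbers are added
            break
        create_workunit(data)
        created += 1
    return created
```

Run frequently from cron, the loop keeps the unsent pool near the target without ever flooding the database, which is the feedback-control behaviour the post describes.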
You can follow the crunching status on the <a href="http://boinc.unsads.com/rsals/crunching.php">crunching status page</a>, which has been improved since the last news. Numbers come into a waiting queue, ready-to-be-sieved numbers are then moved into the active queue, and an archive of completed numbers is kept in the third part of the page.<br>
The next technical challenge is post-processing: grouping result files into a single multi-gigabyte monster file, which is finally downloaded by volunteers with multi-core, multi-gigabyte-RAM computers for the final factorization steps. We must concatenate thousands of workunits in real time, ensuring they are properly compressed and valid before concatenation and deletion, or the whole file will become invalid. There are atomic operations involved, with failure recovery options if something goes wrong in the middle of the concatenation "transaction".<br>
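The validate-append-delete cycle can be sketched like this, relying on the fact that gzip members may be concatenated byte-for-byte and still decompress to the concatenation of their contents. A hedged sketch with hypothetical file names, not the project's actual scripts:

```python
import gzip
import os

def append_result(master_path: str, part_path: str) -> bool:
    """Validate one gzipped result file, append it to the master file, then delete it.

    gzip members can be concatenated byte-for-byte: decompressing the master
    yields the concatenation of every appended result. The part is deleted only
    after the append has been fsync'd, so a crash mid-"transaction" leaves the
    part file in place to be retried.
    """
    with open(part_path, "rb") as f:
        data = f.read()
    try:
        gzip.decompress(data)  # validity check: the member must decompress cleanly
    except (OSError, EOFError):
        return False           # corrupt or truncated result: keep it for inspection
    with open(master_path, "ab") as master:
        master.write(data)
        master.flush()
        os.fsync(master.fileno())  # make the append durable before deleting the part
    os.remove(part_path)
    return True
```

Validating before appending is what keeps the master file usable: a single corrupt member spliced into the stream would make everything after it undecompressable.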
And if that wasn't enough, we have to manage the small server disk, ensuring there is enough work AND enough room on the server to handle the results!<br>
This will ensure that you, our fellow researchers, do not stay without work for your silicon beasts, while reducing the burden on my fellow project manager and on our beloved linear algebra volunteers.<br> 20 Apr 2010 | 20:44:17 UTC
Crunching status page
A page has been added to show the numbers currently being crunched: <a href="http://boinc.unsads.com/rsals/crunching.php">See here</a> 30 Mar 2010 | 9:57:15 UTC
WHAT TO DO WHEN THE SERVER IS OUT OF JOBS?
Some of you may notice that the server sometimes runs out of work.<br><br>
<ul><li>We're in control. Don't worry, and don't post a message each time this happens.</li>
<li>The project is managed by volunteers, so sometimes debrouxl and I won't have time to add jobs ASAP.</li>
<li>We crunch numbers for partner projects. This means that we need time to coordinate with these projects, reserve numbers, and add them to RSALS. Sometimes this takes long and needs negotiation, and sometimes there are no requests, so we have to search for candidate numbers suited to the RSALS project.</li>
<li>This project is special because it creates much more data than it consumes. Input files are a few hundred bytes, but the results a cruncher generates are megabyte-sized, and a single number to crunch generates a total of ~20 GB of gzipped data. We also need time to gather this, concatenate the gzipped results, and ask the partner project owners to download these files for post-processing.</li>
<li>The server has a single 160 GB disk, which is 90% full most of the time. This means we must manage the disk space very precisely, and it may require that we suspend computation from time to time.</li></ul>
The main bottleneck is the post-processing, because this process requires quad/octo-core workstations with quad/octo gigabytes of RAM! These are not common machines. Because of this, we need to store the results for a long time, until we find someone who is able to run the final crunching step.
This final step cannot be distributed across multiple computers; it needs all the data in a single memory space.
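The precise disk-space management mentioned above can be automated with a simple guard that a work generator consults before creating new WUs. A sketch using only Python's standard library, assuming the ~90% limit described; the function names are illustrative:

```python
import shutil

def disk_usage_fraction(path: str = "/") -> float:
    """Fraction of the filesystem holding `path` that is currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def may_generate_work(path: str = "/", limit: float = 0.90) -> bool:
    """Gate for a work generator: refuse new WUs once the disk is ~90% full."""
    return disk_usage_fraction(path) < limit
```

Checked from the same cron job that creates work, this turns "suspend computation from time to time" into an automatic pause and resume as results are collected and deleted.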
Thank you for your participation and your patience.
squalyl. 16 Mar 2010 | 13:53:46 UTC
Stats were OK; now the folder containing them can be read too :)
GD was missing from my PHP setup; I just added it, so you should have no more problems with pictures on the forum. 19 Jan 2010 | 1:00:41 UTC
I am aware that the stats are no longer exported on
I am now working on getting them back; I don't know why the update_stats binary stopped working after the upgrade.
This will be resolved in the coming days, so please don't send me messages saying "please fix the stats". I know the problem and am working on it, but I'm short on time.
squalyl 27 Dec 2009 | 14:46:16 UTC
Server software upgraded
The server software was upgraded to the latest revision. 24 Dec 2009 | 10:32:26 UTC
News is available as an RSS feed