
kme : performance (109)

Hardware | Elasticsearch: The Definitive Guide [2.x] | Elastic
A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive (you end up needing many, many small machines), and greater than 64 GB has problems that we will discuss in Heap: Sizing and Swapping.


Interesting note about the I/O scheduler used on a system with SSDs:
Check Your I/O Scheduler

If you are using SSDs, make sure your OS I/O scheduler is configured correctly. When you write data to disk, the I/O scheduler decides when that data is actually sent to the disk. The default under most *nix distributions is a scheduler called cfq (Completely Fair Queuing).

This scheduler allocates time slices to each process, and then optimizes the delivery of these various queues to the disk. It is optimized for spinning media: the nature of rotating platters means it is more efficient to write data to disk based on physical layout.

This is inefficient for SSDs, however, since there are no spinning platters involved. Instead, deadline or noop should be used. The deadline scheduler optimizes based on how long writes have been pending, while noop is just a simple FIFO queue.

This simple change can have dramatic impacts. We’ve seen a 500-fold improvement to write throughput just by using the correct scheduler.
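The active scheduler can be read (and, as root, switched) through sysfs. A minimal Python sketch, where the device name ("sda") and the target scheduler are illustrative assumptions, not from the book:

<code class="language-python"># Read and (as root) change the I/O scheduler for a block device via sysfs.
# "sda" and "deadline" are assumptions; adjust for your device and kernel.
from pathlib import Path

def current_scheduler(device="sda"):
    # The active scheduler is shown in brackets, e.g. "noop deadline [cfq]"
    text = Path(f"/sys/block/{device}/queue/scheduler").read_text()
    return text[text.index("[") + 1 : text.index("]")]

def set_scheduler(device="sda", scheduler="deadline"):
    # Requires root; the change does not persist across reboots
    Path(f"/sys/block/{device}/queue/scheduler").write_text(scheduler)

print(current_scheduler())  # e.g. "cfq"</code>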
searchandindex  sysadmin  elasticsearch  architecture  systemrequirements  performance 
10 weeks ago by kme
A Spellchecker Used to Be a Major Feat of Software Engineering
Writing a spellchecker in the mid-1980s was a hard problem. Programmers came up with some impressive data compression methods in response to the spellchecker challenge. Likewise there were some very clever data structures for quickly finding words in a compressed dictionary. This was a problem that could take months of focused effort to work out a solution to. (And, for the record, reducing the size of the dictionary from 200,000+ to 50,000 or even 20,000 words was a reasonable option, but even that doesn't leave the door open for a naive approach.)

Fast forward to today. A program to load /usr/share/dict/words into a hash table is 3-5 lines of Perl or Python, depending on how terse you care to be. Looking up a word in this hash table dictionary is a trivial expression, one built into the language. And that's it. Sure, you could come up with some ways to decrease the load time or reduce the memory footprint, but that's icing and likely won't be needed. The basic implementation is so mindlessly trivial that it could be an exercise for the reader in an early chapter of any Python tutorial.
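For illustration, here is that trivial modern version as a minimal Python sketch (the word-list path is the conventional Linux/macOS location and is assumed to exist):

<code class="language-python"># Load the system word list into a set (a hash table) and check membership.
with open("/usr/share/dict/words") as f:
    words = {line.strip().lower() for line in f}

def misspelled(text):
    # Return the words not found in the dictionary
    return [w for w in text.lower().split() if w not in words]

print(misspelled("teh quick brown fox"))  # ['teh']</code>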

That's progress.
programming  performance  progress 
january 2019 by kme
Apache Bench and Gnuplot: you’re probably doing it wrong | http://www.bradlanders.com/
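The TSV being plotted is ApacheBench's gnuplot output, produced with the -g flag; the exact invocation is an assumption, but it would look something like <code>ab -n 100 -c 10 -g data/testing.tsv http://example.com/</code>.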
<code># Let's output to a jpeg file
set terminal jpeg size 500,500
# This sets the aspect ratio of the graph
set size 1, 1
# The file we'll write to
set output "graphs/sequence.jpg"
# The graph title
set title "Benchmark testing"
# Where to place the legend/key
set key left top
# Draw gridlines oriented on the y axis
set grid y
# Label the x-axis
set xlabel 'requests'
# Label the y-axis
set ylabel "response time (ms)"
# Tell gnuplot to use tabs as the delimiter instead of spaces (default)
set datafile separator "\t"
# Plot the data
plot "data/testing.tsv" every ::2 using 5 title 'response time' with lines
exit</code>
apache  benchmarking  performance  gnuplot  visualization  webmaster  sysadmin 
december 2018 by kme
innodb - Internal reason for killing process taking up long time in mysql - Database Administrators Stack Exchange | https://dba.stackexchange.com/
Possible causes: InnoDB takes forever to roll back mass inserts. Also, innodb_buffer_pool_size might have been too small (the default is only 128 MB).
innodb  mysql  performance  optimization  errormessage  itsslow  maybesolution 
november 2018 by kme
mysql - Roughly how long should an "ALTER TABLE" take on an 1.3GB INNODB table? - Server Fault | https://serverfault.com/
My problem was a pending metadata lock; see https://stackoverflow.com/a/23757255/785213.

I actually just needed to stop the web server: the Flask-Admin application held a lock on the table metadata, and the ALTER TABLE changing column data types wouldn't proceed until that lock was released.
dba  mysql  performance  altertable 
february 2018 by kme
database - what is a reasonable value for max_allowed_packet for Drupal 7 with moderate traffic? - Drupal Answers | https://drupal.stackexchange.com/
Regarding your situation, you should find out what the biggest BLOB in your database is, multiply that number by 11, and set your max_allowed_packet to that number. You should be able to set it for the server without a MySQL restart. (Personally, I would set it to 256M, because that would address other problems regarding migration and replication, which are beyond the scope of this forum.) To set it to 256M for your database for all incoming connections, run this:

<code class="language-sql">SET GLOBAL max_allowed_packet = 1024 * 1024 * 256;</code>

Afterwards, add this setting to my.cnf under the [mysqld] section:
<code class="language-ini">
[mysqld]
max_allowed_packet = 256M</code>
mysql  performance  errormessage  my.cnf  configuration  dba  maybesolution 
february 2018 by kme
Static AMP page
Dozens of publishers and technology companies have come together to create this unfortunate initiative. However, it is 2015, and websites should be small and fast enough to render on mobile devices rapidly using minimal resources. The only reason they are not is because we are addicted to tracking, surveillance, gratuitous animation, and bloated, inefficient frameworks. Requiring a readable version of these sites is a great idea. Let's take it one step further and make it the only version.
mobileweb  google  performance  parody  webdevel 
june 2017 by kme
What are the performance characteristics of sqlite with very large database files? - Stack Overflow


We are using DBs of 50 GB+ on our platform. No complaints; works great. Make sure you are doing everything right! Are you using prepared statements? (SQLite 3.7.3)

Transactions
Prepared statements

Apply these settings (right after you create the DB):

<code class="language-sql">PRAGMA main.page_size = 4096;
PRAGMA main.cache_size = 10000;
PRAGMA main.locking_mode = EXCLUSIVE;
PRAGMA main.synchronous = NORMAL;
PRAGMA main.journal_mode = WAL;
PRAGMA main.cache_size = 5000;  -- note: overrides the cache_size set above</code>

Hope this helps others; it works great here.
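As a rough illustration, the same settings applied from Python's sqlite3 module (the file name and table are assumptions for the sketch):

<code class="language-python"># Open a database and apply the PRAGMAs above, then do a bulk insert using
# one explicit transaction and a parameterized ("prepared") statement.
import sqlite3

conn = sqlite3.connect("big.db")  # file name is an assumption
conn.executescript("""
    PRAGMA main.page_size = 4096;
    PRAGMA main.locking_mode = EXCLUSIVE;
    PRAGMA main.synchronous = NORMAL;
    PRAGMA main.journal_mode = WAL;
    PRAGMA main.cache_size = 5000;
""")
conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
with conn:  # one transaction around the whole bulk insert
    conn.executemany("INSERT INTO t (x) VALUES (?)", ((i,) for i in range(10000)))</code>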
dba  sqlite  database  performance 
january 2017 by kme
PHP: isset vs. array_key_exists – ageek.de

A clear result: isset is substantially faster than array_key_exists; 10,000 checks take only about half as long.
php  arrays  benchmarking  performance  webdevel 
february 2016 by kme
Support and Q&A for Solid-State Drives - Engineering Windows 7 - Site Home - MSDN Blogs
Should the pagefile be placed on SSDs?

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.

In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that:

Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1;
Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB and 88% less than 16 KB;
Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
windows  performance  ssd  optimization  tipsandtricks 
december 2015 by kme
Is there a better way to determine elapsed time in Perl? - Stack Overflow
Possibly. Depends on what you mean by "better".

If you are asking for a "better" solution in terms of functionality, then this is pretty much it.

If you are asking for a "better" in the sense of "less awkward" notation, then know that, in scalar context, Time::HiRes::gettimeofday() will return floating seconds since epoch (with the fractional part representing microseconds), just like Time::HiRes::time() (which is a good drop-in replacement for the standard time() function.)
<code class="language-perl">use Time::HiRes ();

my $start = Time::HiRes::gettimeofday();
...
my $end = Time::HiRes::gettimeofday();
printf("%.2f\n", $end - $start);
</code>

or:
<code class="language-perl">
use Time::HiRes qw( time );

my $start = time();
...
my $end = time();
printf("%.2f\n", $end - $start);
</code>
perl  benchmarking  performance  tictock 
june 2015 by kme