Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Calculate minimum ext3 partition size for a certain amount of data

    - by Daniel Beck
    The following ext3 partitions contain identical data. As you can see, the larger the partition, the more space the same files consume:

        Filesystem   1K-blocks  Used    Available  Use%  Mounted on
        /dev/loop11  3965777    561064  3199964    15%   [...]
        /dev/loop19  573029     543843  29186      95%   [...]

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/loop11  3.8G  548M  3.1G   15%   [...]
        /dev/loop19  560M  532M  29M    95%   [...]

        Filesystem   Inodes   IUsed  IFree    IUse%  Mounted on
        /dev/loop11  1024000  1656   1022344  1%     [...]
        /dev/loop19  1024000  1656   1022344  1%     [...]

    I started with a partition of fixed size that possibly wasted a lot of space, and I want to create a partition that can hold that data at (almost) minimal size. How can I reliably calculate the minimal partition size needed to store a certain amount of data? The amount of data changes over time, and I need to automate these calculations.
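
    The question gives no formula, so here is a rough sketch of the kind of estimate that can be automated: walk the tree, round every file up to whole filesystem blocks, and add per-inode and journal overhead plus some slack. The constants below are assumptions to verify against your own mkfs.ext3 output, not exact ext3 accounting.

        #!/usr/bin/env python3
        """Rough sketch: estimate a near-minimal ext3 partition size for a
        directory tree. Overhead constants are assumptions, not exact ext3
        accounting -- verify against mkfs.ext3 output for your setup."""
        import os

        BLOCK = 4096            # assumed filesystem block size
        INODE_BYTES = 256       # assumed on-disk inode size
        JOURNAL = 32 * 2**20    # assumed journal size (mkfs scales this with fs size)
        FUDGE = 1.05            # 5% slack for superblocks, group descriptors, bitmaps

        def estimate_bytes(root):
            data = 0
            inodes = 0
            for dirpath, dirnames, filenames in os.walk(root):
                # every file and directory consumes one inode
                inodes += len(dirnames) + len(filenames) + 1
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if os.path.islink(path):
                        continue
                    # round each file up to whole blocks, as the fs will
                    size = os.path.getsize(path)
                    data += -(-size // BLOCK) * BLOCK
            meta = inodes * INODE_BYTES
            return int((data + meta + JOURNAL) * FUDGE)

        if __name__ == "__main__":
            import sys
            print(estimate_bytes(sys.argv[1]))

    Judging by the df -i output, the inode table is the dominant fixed cost here (both partitions carry 1,024,000 inodes for 1,656 used); passing an explicit inode count to mkfs.ext3 with -N, sized from the file count such a script produces, recovers most of that space.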

  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS server with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords, because some specific words need to be searchable in my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a repair table, and then tried my MATCH query with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and whilst I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
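
    A sketch of the verification step, assuming Python with the mysql-connector-python package; the credentials are placeholders and the table name articles is hypothetical:

        """Sketch: confirm the fulltext settings took effect and rebuild the
        index. Connection details are placeholders; assumes a MyISAM table
        `articles` with a FULLTEXT index. Requires mysql-connector-python."""
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="mydb")
        cur = conn.cursor()

        # ft_stopword_file is read only at server startup; (built-in) means
        # the running instance never picked up the my.cnf line.
        cur.execute("SHOW VARIABLES LIKE 'ft_%'")
        for name, value in cur:
            print(name, "=", value)

        # Even after a successful restart, an existing index keeps the old
        # stopword filtering until it is rebuilt:
        cur.execute("REPAIR TABLE articles QUICK")
        print(cur.fetchall())
        conn.close()

    Two common causes if (built-in) survives the restart: the line sits outside the [mysqld] section, or the running instance reads a different config file than the one edited.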

  • Enabling DMA in a PC with Windows 7

    - by Mugen
    I was checking my PC's settings using AIDA64 (supposed to be the successor to Everest; it basically shows you detailed information about the hardware you currently have). For my ATA hard disk it shows the DMA setting as "supported, disabled". But when I checked the Windows setting, I saw that it is actually enabled. How can I find out which is correct? And if it's disabled, what do I do to enable it? Thanks for your help. Here are some screenshots for this:

  • Nearest color algorithm using Hex Triplet

    - by Lijo
    The following page lists colors with names: http://en.wikipedia.org/wiki/List_of_colors. For example, the hex triplet #5D8AA8 is "Air Force Blue". This information will be stored in a database table (tbl_Color (HexTriplet, ColorName)) in my system. Suppose I created a color with hex triplet #5D8AA7. I need to get the nearest color available in the tbl_Color table. The expected answer is "#5D8AA8 - Air Force Blue", because #5D8AA8 is the nearest color to #5D8AA7. Is there an algorithm for finding the nearest color? How would I write it in C# / Java? REFERENCE: http://stackoverflow.com/questions/5440051/algorithm-for-parsing-hex-into-color-family http://stackoverflow.com/questions/6130621/algorithm-for-finding-the-color-between-two-others-in-the-colorspace-of-painte Suggested formula (by @user281377): choose the color where the sum of the squared channel differences is minimal: Square(Red(source) - Red(target)) + Square(Green(source) - Green(target)) + Square(Blue(source) - Blue(target))
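
    A minimal sketch of that squared-distance match, in Python for brevity (the question asks for C#/Java, but the algorithm translates directly); the three-row list is a stand-in for tbl_Color:

        """Sketch of the squared-distance nearest-color lookup suggested above."""

        # (hex triplet, name) pairs standing in for tbl_Color rows
        COLORS = [
            ("#5D8AA8", "Air Force Blue"),
            ("#E32636", "Alizarin Crimson"),
            ("#FFBF00", "Amber"),
        ]

        def to_rgb(hex_triplet):
            h = hex_triplet.lstrip("#")
            return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

        def nearest_color(hex_triplet):
            r, g, b = to_rgb(hex_triplet)
            def distance(entry):
                cr, cg, cb = to_rgb(entry[0])
                return (r - cr) ** 2 + (g - cg) ** 2 + (b - cb) ** 2
            return min(COLORS, key=distance)

        print(nearest_color("#5D8AA7"))  # -> ('#5D8AA8', 'Air Force Blue')

    One design note: Euclidean distance in RGB is not perceptually uniform; if matches look off, converting both colors to a space like CIELAB before taking the distance agrees better with what the eye calls "nearest".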

  • perf tuning for vmfs3 on RAID

    - by maruti
    Looking for recommendations for ESX4 with VMFS version 3: should the RAID-5 stripe size match the VMFS block size (64K, 128K, etc.)? I have "adaptive read ahead, write-back" enabled on the PERC 6i. 90% of the VMs on the server are Windows (2008, 2003, Vista, etc., plus SQL 2005). I have read that smaller stripes are good for writes and larger ones for reads; since this is a virtual environment, I'm not sure what's best.

  • Using Microsoft Excel as a Source and a Target in Oracle Data Integrator

    - by julien.testut
    The posts in this series assume that you have some level of familiarity with ODI. The concepts of Models, Datastores, Logical Schemas, Knowledge Modules and Interfaces are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for more details. Recently we saw how to create a connection to Microsoft Excel; let's now take a look at how we can use Microsoft Excel as a source or a target in ODI interfaces. Create a Model in Designer: first we need to create a new Model and a Datastore for our Microsoft Excel spreadsheet. In Designer, open up the Models view and insert a new Model. Give a name to your model; I used EXCEL_SRC_CITY.

  • Hardware profiling [closed]

    - by mgroves
    I'd like to upgrade my computer so that it's faster when editing/rendering video. I'm thinking of first getting a faster hard drive, but I'd like to be able to run some sort of profiling software to tell me what the bottleneck is when rendering video. Any suggestions for software that can do this, preferably on Windows XP and preferably for free?

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
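
    The 50/s figure is pure arithmetic rather than a server limit; a quick sketch of the bound (Little's law) in Python:

        """Back-of-the-envelope check for the numbers above: with a fixed
        per-request latency, ab's throughput is capped at concurrency/latency
        (Little's law), regardless of what the server could actually do."""

        def max_throughput(concurrency, latency_s):
            # ab keeps `concurrency` requests in flight; each slot completes
            # one request every `latency_s` seconds.
            return concurrency / latency_s

        print(max_throughput(100, 2.0))  # 50.0 req/s, matching the observation
        print(max_throughput(500, 2.0))  # 250.0 req/s -- needs 500 open sockets

    So the reduced number says nothing about node.js capacity. The crash around 400-500 concurrent connections is consistent with the per-process open-file limit (often 1024); raising it with ulimit -n before running ab, and increasing concurrency until latency itself starts to climb, gives a better estimate of true capacity.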

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow, so I had to resort to using MacVim. There is nothing wrong with MacVim, but my workflow would be much smoother if I used only Terminal/iTerm2.

    When it's slow:
    - Loading files, in particular Rails files, takes about 1-1.5s; removing rails.vim decreases this to 0.5-1s. In MacVim this is instantaneous.
    - Scrolling through rows and columns via h, j, k, l progressively gets slower the longer I hold down the keys, and eventually starts jumping rows. I have my Key Repeat set to Fast and Delay Until Repeat set to Short.
    - After 10-15 minutes of usage, plugins such as ctrlp or Command-T get very laggy: I'd type a letter, wait 2-3s, then type the next.

    My setup: 11" MacBook Air running Mac OS X Version 10.7.3 (1.6 GHz Intel Core 2 Duo, 4 GB DDR3). My dotfiles.

        > vim --version
        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23)
        MacOS X (unix) version
        Included patches: 1-333
        Huge version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl +persistent_undo +postscript +printer +profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save
        system vimrc file: "$VIM/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim"
        Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1
        Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv -framework Cocoa -framework Python -lruby

    I've tried running without any plugins or syntax highlighting. It opens files a lot faster, but still not as fast as MacVim, and the other two problems remain. Why is my Vim configuration slow, and how can I improve its speed within Terminal or iTerm2?

  • How can I clear *old* browsing data from Google Chrome on Linux, while keeping more recent data?

    - by Norman Ramsey
    I can find plenty of information on how to clear Google Chrome's recent browsing data, over various periods, as well as clearing all browsing data. But I want to clear old browsing data, say, for a start, anything over two months old. (I'm trying to save space on a crowded laptop.) Does anyone know a principled way to do this, or shall I just dive into ~/.config/google, start removing likely-looking files, and hope for the best? I run Google Chrome on Debian Linux.
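
    A sketch of one semi-principled approach, with loud caveats: the biggest space consumer is usually the disk cache, whose location varies by Chrome version (the path below is an assumption; check your install), and Chrome may rebuild or discard whatever cache index remains afterwards. Run it with Chrome closed, and treat the cache as disposable. History lives in SQLite databases and cannot be age-pruned by deleting files.

        """Sketch: free space by deleting cache files older than a cutoff.
        CACHE_DIR is an assumption -- check where your Chrome build keeps
        its cache, and keep Chrome closed while this runs."""
        import os
        import time

        CACHE_DIR = os.path.expanduser("~/.config/google-chrome/Default/Cache")
        CUTOFF = time.time() - 60 * 24 * 3600  # roughly two months

        freed = 0
        for dirpath, _, filenames in os.walk(CACHE_DIR):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < CUTOFF:
                    freed += os.path.getsize(path)
                    os.remove(path)
        print("freed %.1f MiB" % (freed / 2**20))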

  • Data sent to UDP/514 magically appears in syslog without rsyslog running

    - by ale
    I’m using a programming language without a library to log to rsyslog over UDP. I thought I was going to need to write a library but I discovered something weird. If I send data on UDP/514 with the port open on the server then the data appears in the server’s syslog. rsyslogd isn’t running so syslog isn’t doing this. Data doesn’t get formatted into a syslog message so rsyslogd really isn’t doing this (only raw text enters syslog). Linux must see the data coming in on this port and know that it should go into /var/log/messages? If I do the same on another port (e.g. UDP/515) then nothing appears in the log! What is doing this? Some CentOS feature? The kernel?
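
    For the library question itself: a syslog message is nothing more than a UDP datagram with a small header, which is also why whatever daemon is bound to port 514 picks the text up (most likely the classic syslogd; netstat -ulnp | grep 514 on the server will name the process). A minimal RFC 3164-style sender as a Python sketch, with host and tag as placeholders:

        """Sketch: send an RFC 3164-framed syslog message over bare UDP.
        Host, hostname and tag below are placeholders."""
        import socket
        import time

        def send_syslog(host, message, facility=1, severity=6):
            # PRI = facility * 8 + severity; 14 = user-level informational
            pri = facility * 8 + severity
            timestamp = time.strftime("%b %d %H:%M:%S")
            packet = "<%d>%s myhost myapp: %s" % (pri, timestamp, message)
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.sendto(packet.encode("utf-8"), (host, 514))
            sock.close()

        send_syslog("192.0.2.10", "hello from a bare socket")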

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5MB) on an ext3 filesystem. After some searching here on Server Fault I have decided on a directory structure like this: 000/000/000000001.jpg ... 236/519/236519107.jpg. This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:
    - How much time does it take for an image to be saved when there is no or little space used, and how does this change as the space is used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
    Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is that the only thing I have to do to get a clean start if I want to start over with my tests? Do you have any suggestions or corrections?
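
    A sketch of the measurement script described above, under stated assumptions (the mount point is a placeholder and a 1 MiB random buffer stands in for a JPEG): it writes into the 000/000/ tree, fsyncs so timings reflect the disk rather than the page cache, and creates leaf directories lazily, so that cost is timed too.

        """Sketch: write dummy images into the 000/000/ tree and log how
        write latency drifts as the filesystem fills. Paths and sizes are
        placeholders."""
        import os
        import time

        ROOT = "/mnt/imagestore"
        PAYLOAD = os.urandom(2**20)  # 1 MiB stand-in for a JPEG

        def leaf_path(n):
            # 236519107 -> 236/519/236519107.jpg
            s = "%09d" % n
            return os.path.join(ROOT, s[0:3], s[3:6], s + ".jpg")

        def write_one(n):
            path = leaf_path(n)
            os.makedirs(os.path.dirname(path), exist_ok=True)  # lazy leaves
            start = time.time()
            with open(path, "wb") as f:
                f.write(PAYLOAD)
                f.flush()
                os.fsync(f.fileno())  # force to disk so we time real I/O
            return time.time() - start

        for n in range(1, 10001):
            elapsed = write_one(n)
            if n % 1000 == 0:
                print(n, "%.4f s" % elapsed)

    As for the drop_caches line: yes, it makes sense for the read tests. Syncing and then writing 3 to /proc/sys/vm/drop_caches flushes dirty pages and drops the page cache, dentries and inodes, so repeated reads hit the disk instead of RAM; it is the usual way to get a clean start between runs.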

  • The downsides of using nginx as a primary web server?

    - by FractalizeR
    Hello. I've seen millions of websites using nginx as a proxying web server working together with Apache, but I've seen very few servers running nginx alone as their default web server. What are the main downsides of such a config? I can see some: the inability to use per-directory config files like .htaccess, so every configuration change must be made in the main server config file and requires a server reload (though PECL htscanner can compensate for PHP settings); and the unavailability of mod_php for nginx, which can be compensated for by php-fpm, for example. What are the others? Why don't people just drop Apache and move to nginx or another lightweight solution? Maybe there are some special reasons?

  • Can't get any SMART or temperature data from HDDs

    - by Regs
    I have a PC with a recently installed Gigabyte GA-X79-UD5 motherboard. I've encountered a weird problem getting SMART data or temperatures for the HDDs: every single tool I've tried in Windows 7 simply gets no data (HDTune, AIDA64, ...). I suspected that the SMART feature was disabled in the BIOS, but there seems to be no such option in the BIOS settings, and even updating the BIOS brought no luck. The issue is the same with both controllers on the board (Intel and Marvell), which seems unlikely if it were a controller fault; both are working in AHCI mode. Is there anything that can interfere with getting SMART and temperature data from HDDs? Is there any way to check whether it's actually a motherboard issue? Could it even be a hardware issue at all, given that all the HDDs seem to work normally apart from the missing data?
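
    Nothing in the question settles whether the controller is swallowing the ATA pass-through, so as a sketch, one way to bypass the GUI tools and ask the disk directly is smartmontools (available for Windows too). The device name below is an assumption; smartctl --scan lists what it can see, and add-on controllers such as the Marvell often need an explicit -d device-type flag before SMART data passes through.

        """Sketch: query SMART attributes directly via smartmontools.
        Device names are assumptions -- run `smartctl --scan` first."""
        import subprocess

        for dev in ("/dev/sda",):
            # -i prints identity info (including whether SMART is enabled),
            # -A prints the raw attribute table with temperatures
            out = subprocess.run(["smartctl", "-i", "-A", dev],
                                 capture_output=True, text=True)
            print(out.stdout or out.stderr)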

  • svchost.exe @ 100% disk utilization vs. Outlook.ost

    - by Aszurom
    Vista x32 box with Outlook 2007. Outlook is not running and hasn't been fired up for several reboots. I stopped the WMI service and the Windows Search service. The machine is mostly quiet, and then svchost.exe launches an instance and starts banging away at the Outlook.ost file. I can't determine what is causing it. I'm watching it in Process Monitor and trying to investigate it with Process Explorer, but I'm not having much luck figuring out why the machine is so interested in that file. NOTHING is running that should be touching it.

  • Can I configure mod_proxy to use different parameters based on HTTP Method?

    - by Graham Lea
    I'm using mod_proxy as a failover proxy with two balance members. While mod_proxy marks dead nodes as dead, it still routes one request per minute to each dead node and, if the node is still dead, will either return 503 to the client (if maxattempts=0) or retry on another node (if it's greater than 0). The backends are serving a REST web service. Currently I have set maxattempts=0 because I don't want to retry POSTs and DELETEs. This means that when one node is dead, each minute a random client will receive a 503. Unfortunately, most of our clients interpret codes like 503 as "everything is dead" rather than "that didn't work, but please try again". In order to program some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods. Is this possible? (And how? :)
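
    I don't know of a stock mod_proxy knob that varies maxattempts by method; if none exists, one fallback is to keep maxattempts=0 and add the retry one layer up, only for idempotent methods. A stdlib-Python sketch of that client-side policy (the URL and endpoint are placeholders):

        """Sketch: retry only idempotent requests on 503, once, client-side.
        The URL below is a placeholder."""
        import urllib.request
        import urllib.error

        SAFE_METHODS = {"GET", "HEAD"}

        def request(url, method="GET", retries=1):
            attempt = 0
            while True:
                try:
                    req = urllib.request.Request(url, method=method)
                    return urllib.request.urlopen(req, timeout=10)
                except urllib.error.HTTPError as err:
                    retryable = method in SAFE_METHODS and err.code == 503
                    if not retryable or attempt >= retries:
                        raise
                    attempt += 1  # one retry for safe methods, none otherwise

        resp = request("http://api.site.com/health", method="GET")
        print(resp.status)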

  • Excel: making line charts so the line goes through all data points

    - by Mike
    Hi, I've got data covering 50+ years for various products; unfortunately not all products have data for each year. I've created a line chart to show the movement (quantity sold) of these products over the years. It works well, except where the data points are too far apart (e.g. 1965 and then 1975): for some reason there is no line between them. It's not perfect data because of the missing years, and I can live with that; I just want to see the trend, and not just sporadic dots, squares or crosses. Any help or links greatly appreciated. Mike

  • StreamInsight on the Brain - can you help?

    - by sqlartist
    I just came across this guy, who is once again in the news as the world's first cyborg. I read all about this research some years back, when he implanted a chip into his arm to allow him to open doors in his research lab. Now, without really advancing the research, he is claiming that a virus could be implanted onto these implanted devices. Captain Cyborg sidekick implants virus-infected chip - http://www.theregister.co.uk/2010/05/26/captain_cyborg_cyberfud/ This is of interest to me as I actually...(read more)

  • HAproxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAProxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing goes www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having a script on each server cat /etc/HOSTNAME (Slackware). I need it to switch back and forth on each request to test some session stuff (stored in shared memcached), but I'm having trouble getting it to alternate between my two web servers. My config:

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have some different back-ends (I've verified the ACLs are working), with the default option "roundrobin" selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), removing the ACLs, etc. I've been testing on dev.site.com, BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
