Search Results

Search found 24220 results on 969 pages for 'performance tools'.

  • Building a CMS in PHP: Development tools

    - by TRiG
    I'm planning to build a CMS in PHP and MySQL, mainly for my own amusement and education. (Though who knows, I may come up with something useful and cool. Anything's possible.) I'll be asking questions about code architecture etc. later. For now, I'm more interested in development tools. So far, all my playing with code has been done on a web server, and I've edited over FTP. I was thinking it might be quicker to work on a localhost server. Also, that way, I could use version control (which I've never done before). So:

    A. How do I set up a localhost server with many subdomains on an Ubuntu 9.10 computer? Is XAMPP for Linux the way to go, or should I use a standard Apache distro? (Or another webserver altogether?) For that matter, is it possible to set up more than one webserver on the same computer, and to use them for different localhost subdomains?

    B. How do I set up version control covering all the code (which will be on several subdomains of localhost, and in a few shared folders)? I've read Joel Spolsky's HgInit tutorial, and it makes Mercurial look good. And simple, especially if you're working on your own.

    C. Should I continue to use gEdit to write HTML/CSS/JS/PHP, or is there a better free editor out there for these languages?
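
    For question A, one common route is name-based virtual hosts in a stock Apache install; here is a minimal sketch, with hostnames and paths as placeholders (each name also needs an entry in /etc/hosts, since there is no wildcard DNS for localhost subdomains out of the box):

        # One <VirtualHost> block per local subdomain
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName cms.localhost
            DocumentRoot /home/trig/sites/cms
        </VirtualHost>

        <VirtualHost *:80>
            ServerName blog.localhost
            DocumentRoot /home/trig/sites/blog
        </VirtualHost>

    And in /etc/hosts:

        127.0.0.1   localhost cms.localhost blog.localhost

    On Ubuntu these blocks conventionally live as files under /etc/apache2/sites-available/ and are enabled with a2ensite.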

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this on our live server, and as a result, our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component, and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists.

    On our web site, there are a limited number of folders that we want to keep synchronized. In these folders, it is all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: I DID google this first, but there are so many options that I am interested mostly in knowing what actually works well vs what they SAY works, which is why I'm asking here.
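
    For reference, a minimal sketch of the scheduled mirror-and-log approach described above - written in Python purely to illustrate the logic rather than in the preferred .NET, with placeholder paths:

        import filecmp
        import logging
        import shutil
        from pathlib import Path

        logging.basicConfig(filename="sync.log", level=logging.INFO,
                            format="%(asctime)s %(message)s")

        def mirror(src: Path, dst: Path) -> None:
            """Make dst an EXACT duplicate of src, logging every change."""
            dst.mkdir(parents=True, exist_ok=True)
            diff = filecmp.dircmp(str(src), str(dst))
            for name in diff.left_only + diff.diff_files:   # new or changed on staging
                s, d = src / name, dst / name               # (dircmp compares by stat: shallow)
                if s.is_dir():
                    shutil.copytree(s, d)
                else:
                    shutil.copy2(s, d)
                logging.info("copied %s -> %s", s, d)
            for name in diff.right_only:                    # removed on staging
                d = dst / name
                if d.is_dir():
                    shutil.rmtree(d)
                else:
                    d.unlink()
                logging.info("removed %s", d)
            for name in diff.common_dirs:                   # recurse into subfolders
                mirror(src / name, dst / name)

        mirror(Path(r"D:\staging\uploads"), Path(r"\\liveserver\uploads"))

    The upload-triggered variant has the same shape, with a FileSystemWatcher (or equivalent) firing the mirror instead of a schedule.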

  • SEO google keyword position tools?

    - by Peterl86
    Hi guys, I want to check our Google positions for several keywords every day and make a note in a spreadsheet. At the moment we have a student doing it, but it's a rubbish job and it doesn't seem fair on them! Are there any tools available to automate this process? I have tried Rank Checker by seobook.com, but although that should be exactly what I'm looking for, when I set scheduled tasks in it, it doesn't work. Any tips would be appreciated, thanks! Peter

    EDIT: Liam has suggested a Python script to do this, which unfortunately isn't something I'm very familiar with! If anyone knows of a good tutorial or something to help us with this, that would be brilliant.

    Update: Found a PHP script at seoscript.net which looks like a step in the right direction. But I can't get it to work! I get this error. Anyone more knowledgeable than me know how to fix that? I have PEAR installed. Thanks again, Peter
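
    On the Python-script idea: the core of such a script is small; here is an illustrative sketch (hypothetical and fragile by nature - it scrapes Google's result page, whose markup changes and which rate-limits automated queries, which is also why scheduled tools in this space tend to break):

        import re
        import urllib.request
        from urllib.parse import quote_plus

        def google_position(keyword, domain, max_results=100):
            """Return the 1-based position of `domain` for `keyword`, or None."""
            url = ("https://www.google.com/search?q="
                   + quote_plus(keyword) + "&num=" + str(max_results))
            req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
            html = urllib.request.urlopen(req).read().decode("utf-8", "replace")
            # Crude extraction of outbound result links; a real script needs a
            # proper HTML parser and the current result-page structure.
            links = re.findall(r'<a href="(https?://[^"]+)"', html)
            for position, link in enumerate(links, start=1):
                if domain in link:
                    return position
            return None

        for keyword in ["performance tools", "seo tools"]:
            print(keyword, "->", google_position(keyword, "example.com"))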

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all. I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

    Sun Blade X6270
    2x LSISAS1068E SAS controllers
    2x Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
    Fedora Core 12
    2.6.33 release kernel from FC13 (also tried with the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD.

    Our multipath configuration:

        multipath {
            rr_min_io 100
            uid 0
            path_grouping_policy multibus
            failback manual
            path_selector "round-robin 0"
            rr_weight priorities
            alias somealias
            no_path_retry queue
            mode 0644
            gid 0
            wwid somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out.

    According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O, as one would expect:

        Cpu0  :  0.0%us,  1.0%sy,  0.0%ni, 91.2%id,  7.5%wa,  0.0%hi,  0.2%si,  0.0%st
        Cpu1  :  0.0%us,  0.8%sy,  0.0%ni, 93.0%id,  0.2%wa,  0.0%hi,  6.0%si,  0.0%st
        Cpu2  :  0.0%us,  0.6%sy,  0.0%ni, 94.4%id,  0.1%wa,  0.0%hi,  4.8%si,  0.0%st
        Cpu3  :  0.0%us,  7.5%sy,  0.0%ni, 36.3%id, 56.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 85.7%id,  4.9%wa,  0.0%hi,  8.1%si,  0.0%st
        Cpu5  :  0.1%us,  5.5%sy,  0.0%ni, 36.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu6  :  0.0%us,  5.0%sy,  0.0%ni, 36.3%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu7  :  0.0%us,  5.1%sy,  0.0%ni, 36.3%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu8  :  0.1%us,  8.3%sy,  0.0%ni, 27.2%id, 64.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu9  :  0.1%us,  7.9%sy,  0.0%ni, 36.2%id, 55.8%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu10 :  0.0%us,  7.8%sy,  0.0%ni, 36.2%id, 56.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu11 :  0.0%us,  7.3%sy,  0.0%ni, 36.3%id, 56.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu12 :  0.0%us,  5.6%sy,  0.0%ni, 33.1%id, 61.2%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu13 :  0.1%us,  5.3%sy,  0.0%ni, 36.1%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu14 :  0.0%us,  4.9%sy,  0.0%ni, 36.4%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu15 :  0.1%us,  5.4%sy,  0.0%ni, 36.5%id, 58.1%wa,  0.0%hi,  0.0%si,  0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2 * 1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this and doesn't peak and trough with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to
        increase your join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
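
    As a side note on reading that MEMORY USAGE section: the primer's "Configured Max Memory Limit" is simply its global-buffer total plus its per-thread-buffer total, which the figures above bear out; a quick check of the arithmetic:

        # The tuning primer's memory math, using the figures reported above.
        global_buffers_gb = 2.01    # "Configured Max Global Buffers"
        per_thread_gb = 0.274       # "Configured Max Per-thread Buffers" at max_connections = 100
        print(round(global_buffers_gb + per_thread_gb, 2))   # 2.28, the "Configured Max Memory Limit"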

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:

    do your jobs serially (with ImageMagick in parallel mode), or
    set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.

    By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but I could then run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration:

    Quad core (AMD Athlon X4, 2.6GHz)
    Working entirely on a tmpfs (16g ram total; no swap)
    No other major loads

    Results:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    0m3.784s
        user    0m2.240s
        sys     0m0.230s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    0m9.097s
        user    0m28.020s
        sys     0m0.910s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    0m9.844s
        user    0m33.200s
        sys     0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did that test entirely in RAM. Could it have something to do with how convert works, and having more than one copy at once means it cannot use the processor cache as efficiently or something?

    EDIT: When done with 1000x 769KB files, performance is as expected. Interesting:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.679s
        user    5m6.980s
        sys     0m6.340s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.152s
        user    5m6.140s
        sys     0m6.530s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    2m7.578s
        user    5m35.410s
        sys     0m6.050s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real    1m36.959s
        user    5m48.900s
        sys     0m6.350s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    1m36.392s
        user    5m54.840s
        sys     0m5.650s
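
    For illustration, the fix from the EDIT above - pin ImageMagick to a single thread and run one job per core yourself - might look like this as a small Python driver (a sketch; it assumes the convert binary is on PATH):

        import os
        import subprocess
        from multiprocessing.pool import ThreadPool
        from pathlib import Path

        # Stop ImageMagick's internal multithreading so jobs don't fight over cores.
        env = dict(os.environ, MAGICK_THREAD_LIMIT="1")

        def convert(bmp: Path) -> None:
            subprocess.run(
                ["convert", "-auto-level", str(bmp), str(bmp.with_suffix(".png"))],
                env=env, check=True)

        # One single-threaded convert per core.
        with ThreadPool(os.cpu_count()) as pool:
            pool.map(convert, Path(".").glob("*.bmp"))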

  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6.

        ## Pause rebuilding when measuring raw I/O performance
        # echo 1 > /proc/sys/dev/raid/speed_limit_min
        # echo 1 > /proc/sys/dev/raid/speed_limit_max
        ## Drop caches so that does not interfere with measuring
        # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null
        # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s
        [... similar for each disk ...]
        # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s
        [... similar for each disk ...]

    So we can read sequentially at 140 MB/s in the outer tracks and 82 MB/s in the inner tracks on all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more.

        # echo 800000 > /proc/sys/dev/raid/speed_limit_min
        # echo 800000 > /proc/sys/dev/raid/speed_limit_max
        # cat /proc/mdstat
        md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1]
              27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU]
              [=========>...........] recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec

    But we only get 40 MB/s. And often this drops to 30 MB/s.

        # iostat -dkx 1
        sdbc 0.00 8023.00 0.00 329.00 0.00 33408.00 203.09 0.70 2.12 1.06 34.80
        sdbd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sdbe 13.00 0.00 8334.00 0.00 33388.00 0.00 8.01 0.65 0.08 0.06 47.20
        sdbf 0.00 0.00 8348.00 0.00 33388.00 0.00 8.00 0.58 0.07 0.06 48.00
        sdbg 16.00 0.00 8331.00 0.00 33388.00 0.00 8.02 0.71 0.09 0.06 48.80
        sdbh 961.00 0.00 8314.00 0.00 37100.00 0.00 8.92 0.93 0.11 0.07 54.80
        sdbj 70.00 0.00 8276.00 0.00 33384.00 0.00 8.07 0.78 0.10 0.06 48.40
        sdbk 124.00 0.00 8221.00 0.00 33380.00 0.00 8.12 0.88 0.11 0.06 47.20
        sdbl 83.00 0.00 8262.00 0.00 33380.00 0.00 8.08 0.96 0.12 0.06 47.60
        sdbm 0.00 0.00 8344.00 0.00 33376.00 0.00 8.00 0.56 0.07 0.06 47.60

    iostat says the disks are not 100% busy (but only 40-50%). This fits with the hypothesis that the max is around 80 MB/s. Since this is software raid the limiting factor could be CPU. top says:

        PID   USER PR NI VIRT RES SHR S %CPU %MEM TIME+     COMMAND
        38520 root 20  0  0    0   0  R  64   0.0 2947:50   md2_raid6
        6117  root 20  0  0    0   0  D  53   0.0 473:25.96 md2_resync

    So md2_raid6 and md2_resync are clearly busy, taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunksize gave the least CPU penalty. If this speed is normal: What is the limiting factor? Can I measure that? If this speed is not normal: How can I find the limiting factor? Can I change that?

  • Performance of Silverlight Datagrid in Silverlight 3 vs Silverlight 4 on a mac

    - by Simon
    I'm using the Silverlight 4 beta for a LOB application. After finding out today that I'll have to wait perhaps 4 months to be able to develop with SL4 on Visual Studio 2010, I'm thinking I need to downgrade my application to SL3, but that's another question. The problem is that I'm noticing absolutely abysmal performance on a Mac for simple datagrids that work just fine on a PC. These grids contain only 5-10 columns and maybe 50 rows. Paging up and down sometimes takes 1-2 seconds. I would appreciate anybody's experience with which of the following is the best solution:

    reverting to Silverlight 3 and hoping the DataGrid is faster
    switching to a 3rd-party datagrid such as Telerik's
    forgetting Silverlight altogether

    I was hoping that the SL4 runtime might be updated, but that probably won't happen for 3-4 months. Just a reminder: this is specifically a Mac issue. Performance on my PC, while slightly slow to populate the grid initially, is fine.

  • .NET or Windows Synchronization Primitives Performance Specifications

    - by ovanes
    Hello, I am currently writing a scientific article, where I need to be very exact with citations. Can someone point me to MSDN, an MSDN article, a published article, or a book where I can find a performance comparison of Windows or .NET synchronization primitives? I know that these are in descending order of performance: Interlocked API, Critical Section, .NET lock statement, Monitor, Mutex, EventWaitHandle, Semaphore. Many thanks, Ovanes

    P.S. I found a great book: Concurrent Programming on Windows by Joe Duffy. This book is written by one of the head concurrency developers for the .NET Framework and is simply brilliant, with lots of explanations of how things work or were implemented.

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast enough. I now have to implement a logging strategy; I'm going to use log4net, but... what and how should I log? I mean:

    How does logging impact performance?
    Is it possible/recommendable to log using async calls?
    Is it better to use a text file or a database?
    Is it possible to make logging conditional? For example, log to the database by default, and if that fails, switch to a text file.
    What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box?

    The requirements say that the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.
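
    On the async question specifically, the usual pattern in any logging library is to have request threads hand records to an in-memory queue and let one background thread do the slow I/O. A minimal sketch of that shape, using Python's standard logging module purely for illustration (log4net specifics would differ):

        import logging
        import logging.handlers
        import queue

        log_queue = queue.Queue(-1)          # unbounded handoff queue

        # Request threads only enqueue records: cheap and non-blocking.
        root = logging.getLogger()
        root.setLevel(logging.INFO)
        root.addHandler(logging.handlers.QueueHandler(log_queue))

        # One background thread drains the queue and does the file I/O.
        listener = logging.handlers.QueueListener(
            log_queue, logging.FileHandler("app.log"))
        listener.start()

        logging.info("request handled")      # returns without touching the disk
        listener.stop()                      # flushes remaining records on shutdown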

  • iPhone Foundation - performance implications of mutable and xxxWithCapacity:0

    - by Adam Eberbach
    All of the collection classes have two versions - mutable and immutable, such as NSArray and NSMutableArray. Is the distinction merely to promote careful programming by providing a const collection, or is there some performance hit when using a mutable object as opposed to an immutable one? Similarly, each of the collection classes has a method xxxWithCapacity:, like [NSMutableArray arrayWithCapacity:0]. I often use zero as the argument because it seems a better choice than guessing wrongly how many objects might be added. Is there some performance advantage to creating a collection with capacity for enough objects in advance? If not, why isn't the method something like + (id)emptyArray?

  • Java stuff you didn't know you didn't know

    - by pretzelstick
    What are some Java language features or Java-related tools that you were embarrassed to find that you didn't know about? Examples for me are:

    jmap
    RuntimeException

    Please post things every Java programmer should know about, be they language features, tools, or facts about how the JVM works, but that you found out about long after you should have.

  • MSBuild appears to only use old output files for custom build tools

    - by sixlettervariables
    I have an ANTLR grammar file as part of a C# project file and followed the steps outlined in the User Manual:

        <Project ...>
          <PropertyGroup>
            <Antlr3ToolPath>$(ProjectDir)tools\antlr-3.1.3\lib</Antlr3ToolPath>
            <AntlrCleanupPath>$(ProjectDir)AntlrCleanup\$(OutputPath)</AntlrCleanupPath>
          </PropertyGroup>
          <ItemGroup>
            <Antlr3 Include="Grammar\Foo.g">
              <OutputFiles>FooLexer.cs;FooParser.cs</OutputFiles>
            </Antlr3>
            <Antlr3 Include="Grammar\Bar.g">
              <OutputFiles>BarLexer.cs;BarParser.cs</OutputFiles>
            </Antlr3>
          </ItemGroup>
          <Target Name="GenerateAntlrCode" Inputs="@(Antlr3)" Outputs="%(Antlr3.OutputFiles)">
            <Exec Command="java -cp %22$(Antlr3ToolPath)\antlr-3.1.3.jar%22 org.antlr.Tool -message-format vs2005 @(Antlr3Input)" Outputs="%(Antlr3Input.OutputFiles)" />
            <Exec Command="%22$(AntlrCleanupPath)\AntlrCleanup.exe%22 @(Antlr3Input) %(Antlr3Input.OutputFiles)" />
          </Target>
          <ItemGroup>
            <!-- ...other files here... -->
            <Compile Include="Grammar\FooLexer.cs">
              <AutoGen>True</AutoGen>
              <DesignTime>True</DesignTime>
              <DependentUpon>Foo.g</DependentUpon>
            </Compile>
            <Compile Include="Grammar\FooParser.cs">
              <AutoGen>True</AutoGen>
              <DesignTime>True</DesignTime>
              <DependentUpon>Foo.g</DependentUpon>
            </Compile>
            <!-- ... -->
          </ItemGroup>
        </Project>

    For whatever reason, the Compile steps only use old versions of the code; no amount of tweaking appears to help.

  • A way to measure performance

    - by Andrei Ciobanu
    Given Exercise 14 from 99 Haskell Problems:

    (*) Duplicate the elements of a list. E.g.:

        *Main> dupli''' [1..10]
        [1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10]

    I've implemented 4 solutions:

        {-- my first attempt --}
        dupli :: [a] -> [a]
        dupli [] = []
        dupli (x:xs) = replicate 2 x ++ dupli xs

        {-- using concatMap and replicate --}
        dupli' :: [a] -> [a]
        dupli' xs = concatMap (replicate 2) xs

        {-- using foldl --}
        dupli'' :: [a] -> [a]
        dupli'' xs = foldl (\acc x -> acc ++ [x,x]) [] xs

        {-- using foldl 2 --}
        dupli''' :: [a] -> [a]
        dupli''' xs = reverse $ foldl (\acc x -> x:x:acc) [] xs

    Still, I don't know how to really measure performance. So what's the recommended function (from the above list) in terms of performance? Any suggestions?

  • NSURLConnection performance

    - by oksk
    Hi all, I'm using NSURLConnection for downloading some images in my app. Before this, I implemented it with NSData (dataWithContentsOfURL:) on an NSThread. But I wanted to be able to cancel downloads in progress, so I changed to NSURLConnection. That caused another problem: performance was much worse after the change. For example, downloading the images takes at least 5 seconds with the NSThread (async NSData) approach, but 2 or 3 times longer than that with async NSURLConnection! Can I enhance performance? How?

    (* Sorry, my question originally said dataWithContentOfFile; the correct method is dataWithContentsOfURL:.)

  • Jquery Tools Overlay call on javascript function

    - by user342944
    Hi guys, I have a jQuery overlay window in my HTML, along with a button to activate it. Here is the code:

        <!-- Validation Overlay Box -->
        <div class="modal" id="yesno" style="top: 100px; left: 320px; position: relative; display: none; z-index: 0; margin-top: 100px;">
          <h2>&nbsp;&nbsp;Authentication Failed</h2>
          <p style="font-family:Arial; font-size:small; text-align:center">
            Ether your username or password has been entered incorrectly. Please make sure that
            your username or password entered correctly...
          </p>
          <!-- yes/no buttons -->
          <p align="center">
            <button class="close"> OK </button>
          </p>
        </div>

    Here is the jQuery Tools script:

        <SCRIPT>
        $(document).ready(function() {
          var triggers = $(".modalInput").overlay({
            // some mask tweaks suitable for modal dialogs
            mask: {
              color: '#a2a2a2',
              loadSpeed: 200,
              opacity: 0.9
            },
            closeOnClick: false
          });

          var buttons = $("#yesno button").click(function(e) {
            // get user input
            var yes = buttons.index(this) === 0;
            // do something with the answer
            triggers.eq(0).html("You clicked " + (yes ? "yes" : "no"));
          });

          $("#prompt form").submit(function(e) {
            // close the overlay
            triggers.eq(1).overlay().close();
            // get user input
            var input = $("input", this).val();
            // do something with the answer
            triggers.eq(1).html(input);
            // do not submit the form
            return e.preventDefault();
          });
        });
        </SCRIPT>

    And here is how it's called, by clicking on a button or a hyperlink:

        <BUTTON class="modalInput" rel="#yesno">You clicked no</BUTTON>

    What I want is to show the overlay not by clicking on a button or a link, but by calling it from a JavaScript function like "showOverlay()". Is that possible?

  • Performance benefits of upgrading Richfaces to newer version

    - by peteDog
    I have a client that's running an application based on JBoss 4.0.5, Seam 1.2, and RichFaces 3.0.1. Their system is having performance problems because a lot of data comes back from the server to be displayed on screen, and the rendering of that data seems to take forever. The data brought back is displayed in a tabbed interface, but the tabs aren't currently loaded individually - everything loads at once. I'm trying to build a case to present to the client on the benefits of upgrading to a newer version of RichFaces which, as I understand it, has added a great number of features related to tabbed panels and using Ajax to page the data, so you load only the chunks you need to display at the moment rather than the contents of the other tabs. The move to a newer version of RichFaces will also require newer versions of JBoss and Seam, as the current production build of RichFaces 3.2.1 requires JSF 1.2. If anyone has suggestions or experience regarding the performance of current versions of RichFaces, paging, etc., I would really appreciate some feedback.

  • Cost of exception handlers in Python

    - by Thilo
    In another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if-statement?
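
    For anyone who wants to put numbers on this, the standard library's timeit module makes the three-way comparison straightforward; a quick sketch (absolute figures vary by machine and interpreter version, and the callable indirection adds the same overhead to all three cases):

        import timeit

        d = {'a': 1}

        def with_if():                      # guard with a membership test
            return d['a'] if 'a' in d else None

        def with_try_hit():                 # handler present, exception never raised
            try:
                return d['a']
            except KeyError:
                return None

        def with_try_miss():                # exception raised and caught on every call
            try:
                return d['missing']
            except KeyError:
                return None

        for fn in (with_if, with_try_hit, with_try_miss):
            print(fn.__name__, timeit.timeit(fn, number=1_000_000))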

  • Virtual Earth Shape Rendering Performance

    - by Mike
    I am overlaying a transparent image on my VEMap control by rendering it as a single VEShape. The shape changes size dynamically depending on the zoom level of my map and can be as large as 4000*4000px. In older browsers such as IE6 and early versions of Firefox 2.x, map control performance degrades rapidly when my shape gets larger than 1500*1500px. The mouse pointer moves slowly and the map responds very slowly to events. I don't see this issue at all in newer browsers (IE7+). Are there any workarounds to boost the performance of rendering a large shape for IE6 users?

  • How does glClear() improve performance?

    - by Jon-Eric
    Apple's Technical Q&A on addressing flickering (QA1650) includes the following paragraph (emphasis mine):

        You must provide a color to every pixel on the screen. At the beginning of your drawing code, it is a good idea to use glClear() to initialize the color buffer. A full-screen clear of each of your color, depth, and stencil buffers (if you're using them) at the start of a frame can also generally improve your application's performance.

    On other platforms, I've always found it to be an optimization to not clear the color buffer if you're going to draw to every pixel. (Why waste time filling the color buffer if you're just going to overwrite that clear color?) How can a call to glClear() improve performance?

  • Cant get flexigrid to work in Jquery Tools Tab

    - by John
    I'm fairly new to jQuery, and I'm using jQuery Tools for tabs. In one of the tabs I want to show flexigrid, but when I try this, flexigrid does not show up; the tab is just blank. If I set up flexigrid in a standalone page outside the tab, it works just fine. Below is the code that isn't working. Again, I'm new, so please go easy!

        <ul class="css-tabs">
          <li><a href="#details">Account Details</a></li>
          <li><a href="#accounts">Sub Accounts</a></li>
          <li><a href="#groups">Groups</a></li>
          <li><a href="#support">Tickets</a></li>
        </ul>

        <div class="css-panes">
          <div>Tab 1</div>
          <div><table id="flex1" style="display:none"></table></div>
          <div>Tab 3</div>
          <div>Tab 4</div>
        </div>

        <script>
        $(function() {
          $("ul.css-tabs").tabs("div.css-panes > div").history();
        });

        $('.flexme1').flexigrid();
        $('.flexme2').flexigrid({height:'auto',striped:false});

        $("#flex1").flexigrid({
          url: '/accounts_list.php',
          dataType: 'json',
          colModel : [
            {display: 'ID', name : 'id', width : 45, sortable : true, align: 'center'},
            {display: 'Username', name : 'username', width : 120, sortable : true, align: 'left'},
            {display: 'Display Name', name : 'displayname', width : 150, sortable : true, align: 'left'},
            {display: 'Limit', name : 'accounts', width : 50, sortable : true, align: 'center'},
            {display: 'Rate', name : 'charge', width : 50, sortable : true, align: 'center'},
            {display: 'Subs', name : 'subcount', width : 50, sortable : true, align: 'center'}
          ],
          searchitems : [
            {display: 'ID', name : 'id'},
            {display: 'Username', name : 'username', isdefault: true},
            {display: 'Display Name', name : 'displayname'}
          ],
          sortname: "id",
          sortorder: "desc",
          usepager: true,
          singleSelect: true,
          title: 'Test',
          useRp: true,
          rp: 20,
          showTableToggleBtn: false,
          width: 500,
          height: 250
        });
        </script>

  • Entity Framework Performance Problem

    - by Steve Horn
    I'm hoping that someone can help me understand how to overcome a performance problem I'm running into with the latest version of the Entity Framework. In my test, I created my model from a database consisting of around 80 tables. The problem I'm running into is that the very first query I run on a thread is very expensive. If I run without pre-compiling views, the first query takes anywhere from 5800 to 6600 milliseconds. If I pre-compile the views (see this article), I can get the initial query cost down to about 2800 to 3200 milliseconds. 3 seconds for each request is still unacceptable for my needs. Subsequent queries are very fast. Can you please help me understand how to eliminate the poor performance of the initial query? I'm using the version of Entity Framework that ships with Visual Studio 2010 RC.

  • Tools to help with analysing log files

    - by peter
    I am developing a C# .NET application. In the app.config file I enable trace logging as shown:

        <?xml version="1.0" encoding="UTF-8" ?>
        <configuration>
          <system.diagnostics>
            <trace autoflush="true" />
            <sources>
              <source name="System.Net.Sockets" maxdatasize="1024">
                <listeners>
                  <add name="MyTraceFile"/>
                </listeners>
              </source>
            </sources>
            <sharedListeners>
              <add name="MyTraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="System.Net.trace.log" />
            </sharedListeners>
            <switches>
              <add name="System.Net" value="Verbose" />
            </switches>
          </system.diagnostics>
        </configuration>

    Are there any good tools around to analyse the log file that is output? The output looks like this:

        System.Net.Sockets Verbose: 0 : [5900] Data from Socket#8764489::Send
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000000 : 4D 49 4D 45 2D 56 65 72-73 69 6F 6E 3A 20 31 2E : MIME-Version: 1.
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000060 : 65 3A 20 37 20 41 70 72-20 32 30 31 30 20 31 35 : e: 7 Apr 2010 15
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000070 : 3A 32 32 3A 34 30 20 2B-31 32 30 30 0D 0A 53 75 : :22:40 +1200..Su
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000080 : 62 6A 65 63 74 3A 20 5B-45 72 72 6F 72 5D 20 45 : bject: [Error] E
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000090 : 78 63 65 70 74 69 6F 6E-20 69 6E 20 53 79 6E 63 : xception in Sync
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 000000A0 : 53 65 72 76 69 63 65 20-28 32 30 30 38 2E 30 2E : Service (2008.0.
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 000000B0 : 33 30 34 2E 31 32 33 34-32 29 0D 0A 43 6F 6E 74 : 304.12342)..Cont
        DateTime=2010-04-07T03:22:40.1067012Z

    If there is anything that can take the output shown above (my output is a text file 100MB in size), group packets together, and help with finding particular issues, I would like to hear about it. Thanks.
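
    Failing a ready-made tool, the dump lines are regular enough that a short script can regroup them; here is a sketch that stitches the ASCII column back together per thread id, assuming exactly the format shown above:

        import re
        import sys
        from collections import defaultdict

        # Matches the hex-dump payload lines of a System.Net trace, e.g.
        # "System.Net.Sockets Verbose: 0 : [5900] 00000000 : 4D 49 ... : MIME-Version: 1."
        LINE = re.compile(
            r"\[(?P<thread>\d+)\] (?P<offset>[0-9A-F]{8}) : "
            r"(?P<hex>[0-9A-F\- ]+) : (?P<ascii>.*)$"
        )

        def group_payloads(path):
            """Collect the ASCII column of each dump line, grouped by thread id."""
            streams = defaultdict(list)
            with open(path) as f:
                for line in f:
                    m = LINE.search(line)
                    if m:
                        streams[m.group("thread")].append(m.group("ascii"))
            return streams

        if __name__ == "__main__":
            for thread, chunks in group_payloads(sys.argv[1]).items():
                print("--- thread " + thread + " ---")
                print("".join(chunks))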

  • Switch from Microsofts STL to STLport

    - by Laserallan
    Hi! I'm using the STL quite heavily in performance-critical C++ code under Windows. One possible "cheap" way to get some extra performance would be to change to a faster STL implementation. According to this post, STLport is faster and uses less memory; however, the post is a few years old. Has anyone made this change recently, and what were your results?
