Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • How can I improve the performance of this algorithm

    - by Justin
      // Checks whether the array contains two elements whose sum is s.
      // Input: A list of numbers and an integer s
      // Output: return True if the answer is yes, else return False
      public static boolean calvalue(int[] numbers, int s) {
          for (int i = 0; i < numbers.length; i++) {
              for (int j = i + 1; j < numbers.length; j++) {
                  if (numbers[i] < s) {
                      if (numbers[i] + numbers[j] == s) {
                          return true;
                      }
                  }
              }
          }
          return false;
      }
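
    For reference, the standard O(n) alternative replaces the nested loops with a hash set of values seen so far - a minimal sketch (method name is illustrative), not part of the original question. Incidentally, the guard numbers[i] < s in the original skips valid pairs involving zero or negative values.

      import java.util.HashSet;
      import java.util.Set;

      // Returns true if two distinct elements of numbers sum to s.
      // One pass: for each value, check whether its complement was already seen.
      public static boolean hasPairWithSum(int[] numbers, int s) {
          Set<Integer> seen = new HashSet<>();
          for (int n : numbers) {
              if (seen.contains(s - n)) {
                  return true;
              }
              seen.add(n);
          }
          return false;
      }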

    Read the article

  • C/C++ variable length automatic array performance

    - by aaa
    Hello. Is there significant CPU/memory overhead associated with using automatic (variable-length) arrays with g++/Intel on a 64-bit x86 Linux platform?

      int function(int N) {
          double array[N];
          ...
      }

    Specifically:

    - overhead compared to allocating the array beforehand (assuming the function is called multiple times)
    - overhead compared to using new
    - overhead compared to using malloc

    The range of N is roughly from 1 KB to 16 KB; stack overrun is not a problem. Thank you.
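
    A minimal sketch of the three allocation strategies being compared (function names are illustrative). With g++, double array[N] is a C99-style VLA accepted as a GNU extension in C++: the allocation is roughly one stack-pointer adjustment per call, whereas new and malloc go through the heap allocator.

      #include <cstdlib>

      void with_vla(int N) {
          double array[N];            // stack allocation: one stack-pointer bump
          // ... use array ...
          (void)array;
      }

      void with_new(int N) {
          double* a = new double[N];  // heap allocation via operator new[]
          // ... use a ...
          delete[] a;
      }

      void with_malloc(int N) {
          double* a = static_cast<double*>(std::malloc(N * sizeof(double)));
          // ... use a ...
          std::free(a);
      }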

    Read the article

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stack Overflow, please bear with me. I'm testing an algorithm for 2D bin packing and I've chosen PHP to mock it up, as it's my bread-and-butter language nowadays. As you can see at http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack. For some hard-to-handle sets it would hit PHP's 30s runtime limit.

    I did some profiling and it shows that most of the time my script goes through different parts of a small 2D array of 0's and 1's. It either checks whether a certain cell equals 0/1 or sets it to 0/1. It can do such operations millions of times, and each takes a few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster, or even an array of 1-bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for this? If I do need to convert it to, let's say, C++, how good are the automatic converters? My script is just a lot of for loops with basic array and object manipulations. Thank you!

    Edit: This function gets called more than any other. It reads a few properties of a very simple object and goes through a very small part of a smallish array to check whether any element is not equal to 0.

      function fits($bin, $file, $x, $y) {
          $flag = true;
          $xw = $x + $file->get_width();
          $yh = $y + $file->get_height();
          for ($i = $x; $i < $xw; $i++) {
              for ($j = $y; $j < $yh; $j++) {
                  if ($bin[$i][$j] !== 0) {
                      $flag = false;
                      break;
                  }
              }
              if (!$flag) break;
          }
          return $flag;
      }
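
    For comparison, the same check written with an early return drops the $flag bookkeeping entirely - a behavior-equivalent sketch:

      function fits($bin, $file, $x, $y) {
          $xw = $x + $file->get_width();
          $yh = $y + $file->get_height();
          for ($i = $x; $i < $xw; $i++) {
              for ($j = $y; $j < $yh; $j++) {
                  if ($bin[$i][$j] !== 0) {
                      return false; // bail out on the first occupied cell
                  }
              }
          }
          return true;
      }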

    Read the article

  • Which should I use? (performance)

    - by Yim
    I want to know a simple thing: when setting up a style that is inherited by all of an element's children, is it recommended to be as specific as possible (even if you don't mind other elements getting the style)? The structure is:

      html > body > #parent_content > #wrapper > p

    I don't mind #parent_content or #wrapper having the style; I do mind changing the html or body style (or all p elements). So which should I use?

      #parent_content { color: #555; }
      #parent_content p { color: #555; }
      #wrapper { color: #555; }
      ...

    Also, some links to tutorials about this would be great.

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance.

    The first and most common step, if you suspect high CPU utilization (or are alerted to it), is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this (note that to see all processes you should select "Show processes from all users"). In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%).

    Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note: if your server is under severe stress and you are unable to log in through SSMS, you can use another machine's SSMS to connect to the server through the DAC - Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using the DAC).

      SELECT scheduler_id
            ,cpu_id
            ,status
            ,runnable_tasks_count
            ,active_workers_count
            ,current_tasks_count
            ,load_factor
            ,yield_count
      FROM sys.dm_os_schedulers
      WHERE scheduler_id < 1048576  -- regular (non-hidden) schedulers only

    See below for the BOL definitions of the above columns.

    scheduler_id - ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Schedulers with IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id - ID of the CPU with which this scheduler is associated.
    status - Indicates the status of the scheduler.
    runnable_tasks_count - Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count - Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count - Number of current tasks that are associated with this scheduler.
    load_factor - Internal value that indicates the perceived load on this scheduler.
    yield_count - Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output, per the current_tasks_count column; this count indicates how many activities are currently associated with the schedulers, and when a task completes, the number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented; SQLOS uses this value to decide whether a new task should be put on this scheduler or on another one (the new task is allocated to the less loaded scheduler). The very high value of this column indicates that all the schedulers have a high load. There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.

    The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note that in its current form it only shows the top 10 records):

      SELECT TOP 10 st.text
            ,st.dbid
            ,st.objectid
            ,qs.total_worker_time
            ,qs.last_worker_time
            ,qp.query_plan
      FROM sys.dm_exec_query_stats qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
      CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
      ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries, and their plans, at the top. Note the BOL definitions for the important columns:

    total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds and got a new output: the stored procedure dbo.TestProc1 now appeared in fourth place, and once again its last_worker_time was the highest. This means the procedure TestProc1 consumes significant CPU time each time it executes.

    In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it causes a high CPU load. I used SQL Server 2008 (SP1) to test all the queries in this article.
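
    As a complementary check (a sketch, not from the original article), sys.dm_exec_requests shows what is executing right now and how much CPU each request has accumulated, which helps when total_worker_time is skewed by plans that have been cached for a long time:

      -- Currently executing requests, most CPU-hungry first.
      SELECT r.session_id
            ,r.status
            ,r.cpu_time            -- ms of CPU consumed by this request so far
            ,r.total_elapsed_time
            ,st.text
      FROM sys.dm_exec_requests r
      CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
      ORDER BY r.cpu_time DESC;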

    Read the article

  • Better way to load level content in XNA?

    - by user2002495
    Currently I load all my assets in XNA in the main Game class. What I want to achieve later is to load only specific assets for specific levels (the game will consist of many levels). Here is how I load my main assets in the main class:

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
          plane.animation = "down";
          plane.pos = new Vector2(400, 500);
          plane.fps = 15;
          Global.currentPos = plane.pos;
          lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                            Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                            new Vector2(0, 0), new Vector2(0, -600));
          CommonBullet.LoadContent(Content);
          CommonEnemyBullet.LoadContent(Content);
      }

      protected override void UnloadContent() { }

      protected override void Update(GameTime gameTime)
      {
          if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
              this.Exit();
          plane.Update(gameTime);
          lvl1.Update(gameTime);
          foreach (CommonEnemy ce in cel)
          {
              if (ce.CollidesWith(plane))
              {
                  ce.hasSpawn = false;
              }
              foreach (CommonBullet b in plane.commonBulletList)
              {
                  if (b.CollidesWith(ce))
                  {
                      ce.hasSpawn = false;
                  }
              }
              ce.Update(gameTime);
          }
          LoadCommonEnemy();
          base.Update(gameTime);
      }

      private void LoadCommonEnemy()
      {
          int randY = rand.Next(-600, -10);
          int randX = rand.Next(0, 750);
          if (cel.Count < 3)
          {
              cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY));
          }
          for (int i = 0; i < cel.Count; i++)
          {
              if (!cel[i].hasSpawn)
              {
                  cel.RemoveAt(i);
                  i--;
              }
          }
      }

      protected override void Draw(GameTime gameTime)
      {
          GraphicsDevice.Clear(Color.Black);
          spriteBatch.Begin();
          lvl1.Draw(spriteBatch);
          plane.Draw(spriteBatch);
          foreach (CommonEnemy ce in cel)
          {
              ce.Draw(spriteBatch);
          }
          spriteBatch.End();
          base.Draw(gameTime);
      }

    I wish to load my players, enemies, and everything else in the Level1 class. However, when I move my player & enemy code into the Level1 class, the gameTime comes back null. Here is my Level1 class:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Audio;
      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;
      using Microsoft.Xna.Framework.Media;
      using Microsoft.Xna.Framework.Input;
      using SpaceShooter_Beta.Animation.PlayerCollection;
      using SpaceShooter_Beta.Animation.EnemyCollection.Common;

      namespace SpaceShooter_Beta.Levels
      {
          public class Level1
          {
              public Texture2D bgTexture1, bgTexture2;
              public Vector2 bgPos1, bgPos2;
              public float speed = 5f;
              Player plane;

              public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2)
              {
                  this.bgTexture1 = texture1;
                  this.bgTexture2 = texture2;
                  this.bgPos1 = pos1;
                  this.bgPos2 = pos2;
              }

              public void LoadContent(ContentManager cm)
              {
                  plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
                  plane.animation = "down";
                  plane.pos = new Vector2(400, 500);
                  plane.fps = 15;
                  Global.currentPos = plane.pos;
              }

              public void Draw(SpriteBatch sb)
              {
                  sb.Draw(bgTexture1, bgPos1, Color.White);
                  sb.Draw(bgTexture2, bgPos2, Color.White);
                  plane.Draw(sb);
              }

              public void Update(GameTime gt)
              {
                  bgPos1.Y += speed;
                  bgPos2.Y += speed;
                  if (bgPos1.Y >= 600)
                  {
                      bgPos1.Y = -600;
                  }
                  if (bgPos2.Y >= 600)
                  {
                      bgPos2.Y = -600;
                  }
                  plane.Update(gt);
              }
          }
      }

    Of course, when I did this I deleted all the player code from the main Game class. All of that compiles fine (no errors), except that the game cannot start.
    The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same happens with the Draw method in the Level class. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class could be a solution, but I prefer a cleaner approach, since using a switch still means loading all the assets through the main class, and the code would get very long later on with each added level.
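
    One thing worth checking (an observation from the code as posted, not a confirmed fix): the main Game class constructs Level1 but never appears to call its LoadContent, so the plane field inside Level1 stays null and the first plane.Update(gt) dereference throws. A sketch of the wiring:

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                            Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                            new Vector2(0, 0), new Vector2(0, -600));
          lvl1.LoadContent(Content);   // without this call, Level1.plane is never created
          CommonBullet.LoadContent(Content);
          CommonEnemyBullet.LoadContent(Content);
      }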

    Read the article

  • Will logging debugging incur a performance hit if I don't turn debugging on?

    - by romandas
    On a Cisco device, I know that enabling debugging can incur a performance hit, since debug output has such a high priority on the CPU. I know that to log debug output you have to set logging to the debugging level (logging buffered 4096 debugging, for example) and also enable debugging on some feature. Does configuring logging at the debugging level incur the performance hit even if you don't enable debugging on any feature, or would it be safe (assuming you want, and can handle, all the logging events via syslog) to configure 'logging buffered 4096 debugging' so that maximum logging is available if/when someone uses debug?

    Read the article

  • Chef: nested data bag data to template file returns "can't convert String into Integer"

    - by Dalho Park
    I'm creating a simple test recipe with a template and a data bag. What I'm trying to do is create a config file from a data bag that has simple nested information, but I receive the error "can't convert String into Integer". Here are my files:

    1) recipes/default.rb:

      data1 = data_bag_item( 'mytest', 'qa' )['test']
      data2 = data_bag_item( 'mytest', 'qa' )

      template "/opt/env/test.cfg" do
        source "test.erb"
        action :create_if_missing
        mode 0664
        owner "root"
        group "root"
        variables({
          :pepe1 => data1['part.name'],
          :pepe2 => data2['transport.tcp.ip2']
        })
      end

    2) My data bag, named "mytest":

      $ knife data bag show mytest qa
      id:   qa
      test:
        part.name:          L12
        transport.tcp.ip:   111.111.111.111
        transport.tcp.port: 9199
        transport.tcp.ip2:  222.222.222.222

    3) The template file, test.erb:

      part.name=<%= @pepe1 %>
      transport.tcp.binding=<%= @pepe2 %>

    The error is returned when I run chef-client on my server:

      [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error: /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]', /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file', /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
      [2013-06-24T19:50:38+00:00] DEBUG: backtrace entry for compile error: '/var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]''
      [2013-06-24T19:50:38+00:00] DEBUG: Line number of compile error: '19'

      Recipe Compile Error in /var/chef/cache/cookbooks/config_test/recipes/default.rb
      TypeError: can't convert String into Integer

      Cookbook Trace:
      /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]'
      /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file'
      /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'

      Relevant File Content: /var/chef/cache/cookbooks/config_test/recipes/default.rb:
      12: template "/opt/env/test.cfg" do
      13:   source "test.erb"
      14:   action :create_if_missing
      15:   mode 0664
      16:   owner "root"
      17:   group "root"
      18:   variables({
      19:     :pepe1 => data1['part.name'],
      20:     :pepe2 => data2['transport.tcp.ip2']
      21:   })
      22: end

    I tried many things; if I comment out ":pepe1 => data1['part.name']," then :pepe2 => data2['transport.tcp.ip2'] works fine. Only the nested data "part.name" cannot be assigned to @pepe1. Does anyone know why I receive this error? Thanks.
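
    A quick diagnostic worth trying (a sketch; the log lines are illustrative): dump what data_bag_item actually returns before wiring it into the template resource, since the TypeError at line 19 suggests data1 is not the Hash the recipe expects:

      # In recipes/default.rb, before the template resource:
      item = data_bag_item('mytest', 'qa')
      Chef::Log.info("item['test'] class: #{item['test'].class}")
      Chef::Log.info("item['test'] value: #{item['test'].inspect}")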

    Read the article

  • pure-ftpd debian, can't get www-data user working

    - by lynks
    I'm trying to add FTP access to the Apache web files; in the past I have done this with an ftpuser-and-group arrangement. This time I would like to make it possible to log in directly as www-data (the default Apache user on Debian) to make things a bit cleaner. I have checked and re-checked all the common issues:

    - MinUID is set to 1 (www-data has uid 33)
    - www-data has its shell set to /bin/bash in /etc/passwd
    - PAMAuthentication is off
    - UnixAuthentication is on
    - I have restarted pure-ftpd using /etc/init.d/pure-ftpd restart

    The resulting pure-ftpd invocation is:

      /usr/sbin/pure-ftpd -l unix -A -Y 1 -u 1 -E -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -B

    My syslog contains:

      Oct 7 19:46:40 Debian-60-squeeze-64 pure-ftpd: ([email protected]) [WARNING] Can't login as [www-data]: account disabled

    And my FTP client is giving me:

      530 Sorry, but I can't trust you

    Am I missing something obvious?

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version

    I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present - analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version

    I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB, though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty.

    I connected the disk via SATA to my main PC (Arch Linux) and tried:

    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)

    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed via a typical journal'd deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files.

    If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS, but I used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

    - Restores directory structure
    - Recovers many filenames in addition to the file data
    - Is free / very cheap
    - Runs on Linux
    - Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
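
    For what the asker's last-resort plan would look like in practice, here is a minimal file-carving sketch (an illustration under stated assumptions, not a recommended tool: it only locates JPEG start-of-image markers in a raw disk or image, and the path argument and chunk size are made up):

      import sys

      MARKER = b"\xff\xd8\xff"   # JPEG start-of-image marker
      CHUNK = 1 << 20            # scan 1 MiB at a time

      with open(sys.argv[1], "rb") as disk:
          offset = 0             # absolute offset of the start of `window`
          tail = b""
          while True:
              block = disk.read(CHUNK)
              if not block:
                  break
              window = tail + block
              pos = window.find(MARKER)
              while pos != -1:
                  print("possible JPEG at byte", offset + pos)
                  pos = window.find(MARKER, pos + 1)
              # keep the last len(MARKER)-1 bytes so boundary-spanning hits aren't missed
              tail = window[-(len(MARKER) - 1):]
              offset += len(window) - len(tail)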

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets less than 5,000 uniques a day. Yes, you read that right. Go Drupal.

    Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    - When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X" or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    - What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and what's causing it to be so bad.

    - Why does PHP memory matter? My site apparently has a 30MB memory footprint. Will it run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

    Read the article

  • Any "Magic Tricks" For Getting Data Back After Windows 7 Install

    - by user163757
    My old man installed Windows 7 without making a proper backup, and now realizes he left behind some important data. He did a true "clean install", so there is no Windows.old folder in the root directory. However, I believe the format performed on the hard drive was only a quick format, so I am hoping there is some chance of data recovery. I took his hard drive out and have spent a majority of the weekend researching data recovery options. I paid $70 for the GetDataBack software but have had little success with it: I can see all of the files I want to restore; however, they appear corrupt when I try to open them. With all that being said, does anyone know of a viable way to recover some of this data, or is it a lost cause altogether?

    Read the article

  • raid 0 data recovery?

    - by Fred
    Hi all, I have two identical Seagate 7200.9 500GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan:

    - DISK 1 - no-power RAID 0 disk
    - DISK 2 - fully functional RAID 0 disk
    - DISK 3 - fully functional spare disk

    Copy the working drive's (disk 2) data to a third 500GB disk (disk 3), remove the logic board from the working disk (disk 2) and use it to replace the non-working logic board on the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3 just long enough to get the data off it. Hope this makes sense. Here are my questions:

    - Windows Disk Manager currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or getting a disk image) can't be done in Windows. Does anyone know of any software (on Linux, or self-booting) that would allow me to access this disk?
    - Does anyone know of any software that will recreate the spanned drive from the two disk images?
    - Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it.

    The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all - suddenly no signs of life - so I am sure the data is intact! Any help would be really appreciated! Thanks

    Read the article

  • How to increase performance of Acer Aspire One 751h netbook?

    - by Wolfarian
    Hello! I bought my new netbook, an Acer Aspire One 751h, a few days ago and am very displeased with its performance: video calling in Skype is almost unusable, watching videos on YouTube (even in standard definition) is like watching a slideshow, and the whole netbook lags incredibly if I'm running more than 4-5 programs at a time. So, can somebody tell me how to improve the performance of the netbook (OS: Windows XP SP3)? And can you tell me where to control power management, please? Thank you!

    Read the article

  • Data take on with Drupal 6

    - by Robert MacLean
    We are migrating our current intranet to Drupal 6, and there is a lot of data within the current system, which can be classified into:

    - List data: general lists of fields. A common use is a phone list of the employees' phone numbers.
    - Document repository: basically a web version of a file share for documents.

    I can easily get the data plus its meta information out, but how do I bulk-upload the two types of data into Drupal? Uploading the hundreds of thousands of items manually is just not acceptable.

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package.

    According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration:

    - RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/s
    - RPCNFSDCOUNT=16: 18s to cat the file, 60MB/s
    - RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125MB/s
    - RPCNFSDCOUNT=2: 87s to cat the file, 12MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a few seconds (250MB/s). I am dropping caches on the client before each test.

    I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread?

    A few other things I have tried changing, to little or no effect:

    - increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
    - increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
    - mounting with client options rsize=32768,wsize=32768

    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
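
    For reproducibility, the test cycle described above amounts to something like this (a sketch; the config path is the Debian/Ubuntu default and the client mount point is an assumption):

      # On the server: set the thread count and restart nfsd
      sudo sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=1/' /etc/default/nfs-kernel-server
      sudo /etc/init.d/nfs-kernel-server restart

      # On the client: drop the page cache so the read is cold, then time it
      sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
      time cat /mnt/nfs/bigfile > /dev/null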

    Read the article

  • Summing up spreadsheet data when a column contains “#N/A”

    - by Doris
    I am using Google Spreadsheets to work up some historical stock data, and I use the Google function =GOOGLEFINANCE(…) to import the historical closing prices for a stock, then work with that data further. But in the list of data generated by the =GOOGLEFINANCE(…) function, one of the amounts comes up as #N/A. I don't know why, but it happens for various symbols that I have tried.

    When I use a MAX function on the array, which includes the N/A line, the MAX function does not come up with anything but an N/A, so the N/A throws off any further functions. I thought I'd create a second column to the right of the imported data in which I can give it an IF function, something like IF(A1 < 0, "0", A1), with the expectation that it would return 0 if cell A1 is the N/A, and the cell value if it is not. However, this still returns N/A. I also tried an ISBLANK function, but that resulted in the same N/A. Does anyone have any suggestions for a workaround to eliminate the N/A from an array of numbers that I am trying to work with?
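
    For context, a numeric comparison like the one described cannot trap #N/A, because the error value propagates through the comparison itself; the usual helper-column patterns test for the error instead (a sketch - the cell references are illustrative):

      =IF(ISNA(A1), 0, A1)    (returns 0 when A1 is #N/A, otherwise the value)
      =IFERROR(A1, 0)         (returns 0 for any error value in A1)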

    Read the article

  • Does chunk size affect the read performance of a Linux md software RAID1 array?

    - by OldWolf
    This came up in relation to this question on determining the chunk size of an existing RAID array. The general consensus seems to be that chunk size does not apply to RAID1, as it is not striped. On the other hand, the Linux RAID Wiki claims that it will have an effect on read performance. However, I cannot find any benchmarks testing/proving that. Can anyone point to conclusive documentation that it either does or does not affect read performance?

    Read the article

  • Performance boost for MacBook: Hybrid hard drive or 4GB RAM?

    - by user13572
    I have an aluminium 13" MacBook with 2GB or RAM and 5400RPM 500GB hard drive. The main tasks I perform are developing iPhone and Mac apps in Xcode and websites in Coda. I want to improve the performance so I am considering buying 4GB of RAM or a 500GB Seagate solid-state hybrid drive. What is likely to provide the biggest performance boost?

    Read the article
