Search Results

Search found 22447 results on 898 pages for 'cpu load'.


  • What happened to the .NET version definition with v4.0?

    - by Tom Tresansky
    I'm building a C# class library, using beta 2 of Visual Web Developer/Visual C# 2010. I'm trying to record which version of .NET the library was built under. In the past, I was able to use this:

        // What version of .NET was it built under?
        #if NET_1_0
            public const string NETFrameworkVersion = ".NET 1.0";
        #elif NET_1_1
            public const string NETFrameworkVersion = ".NET 1.1";
        #elif NET_2_0
            public const string NETFrameworkVersion = ".NET 2.0";
        #elif NET_3_5
            public const string NETFrameworkVersion = ".NET 3.5";
        #else
            public const string NETFrameworkVersion = ".NET version unknown";
        #endif

    So I figured I could just add:

        #elif NET_4_0
            public const string NETFrameworkVersion = ".NET 4.0";

    Now, in Project > Properties, my target framework is ".NET Framework 4". If I check Assembly.GetExecutingAssembly().ImageRuntimeVersion I can see my runtime version is v4.0.21006 (so I know I have .NET 4.0 installed on my machine). I naturally expect my NETFrameworkVersion variable to hold ".NET 4.0". It does not. It holds ".NET version unknown". So my question is: why is NET_4_0 not defined? Did the naming convention change? Is there some other simple way to determine the .NET Framework build version in versions after 3.5?

    Read the article

  • Is there a way to get the number of connections in a SignalR hub group?

    - by pajo
    Here is my problem: I want to track whether a user is online or offline and notify other clients about it. I'm using hubs and have implemented both the IConnected and IDisconnect interfaces. My idea was to send a notification to all clients when the hub detects a connect or disconnect.

    By default, when a user refreshes the page he gets a new connection id, and eventually the previous connection calls disconnect, notifying other clients that the user is offline even though he's actually online. I tried using my own ConnectionIdFactory that returns the username as the connection id, but with multiple tabs open it would at some point detect the user's connection id as disconnected, after which the client-side hub would try, unsuccessfully, to reconnect in an endless loop, wasting memory and CPU and making the browser almost unusable.

    I needed a fast fix, so I removed my factory; now I add every new connection to a group keyed by username, so I can easily notify a single user across all his connections. But then I have the problem of detecting whether the user is online or offline, as I don't know how many active connections he has. So I'm wondering: is there a way to get the number of connections in one group? Or does anybody have a better idea for tracking when a user goes offline? I'm using SignalR 0.4.

    Read the article

  • xsl:include fails after upgrading PHP version; is libxml/libxslt version mismatch the issue?

    - by Wiseguy
    I'm running Windows XP with the precompiled PHP binaries available from windows.php.net. I upgraded from PHP 5.2.5 to PHP 5.2.16, and now the xsl:includes in some of my stylesheets stopped working. Testing each version in succession, I discovered that it works up through 5.2.8 and does not work in 5.2.9+. I now get the following three errors for each xsl:include:

        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: I/O warning : failed to load external entity "file%3A/C%3A/path/to/included/stylesheet.xsl" in ... on line 227
        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: compilation error: file file%3A//C%3A/path/to/included/stylesheet.xsl line 36 element include in ... on line 227
        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: xsl:include : unable to load file%3A/C%3A/path/to/included/stylesheet.xsl in ... on line 227

    I presume this is because it cannot find the specified file. Many of the includes are in the same directory as the stylesheet being transformed and have no directory in the path, i.e. <xsl:include href="fileInSameDir.xsl">. Interestingly, in the first and third errors it displays the file:// protocol with only one slash instead of the correct two. I'm guessing that's the problem. (When I hard-code a full path using "file:/" it fails, but when I hard-code a full path with "file://" it works.) But what could cause that? A bug in libxslt/libxml?

    I also found an apparent version mismatch between libxml and the version of libxml that libxslt was compiled against:

        5.2.5:  libxml Version = 2.6.26; libxslt compiled against libxml Version = 2.6.26
        5.2.8:  libxml Version = 2.6.32; libxslt compiled against libxml Version = 2.6.32
        === it breaks in versions starting at 5.2.9 ===
        5.2.9:  libxml Version = 2.7.3;  libxslt compiled against libxml Version = 2.6.32
        5.2.16: libxml Version = 2.7.7;  libxslt compiled against libxml Version = 2.6.32

    Up until PHP 5.2.9, libxslt was compiled against the same version of libxml that was included with PHP. But starting with PHP 5.2.9, libxslt was compiled against an older version of libxml than the one included with PHP. Is this a problem with the distributed binaries, or just a coincidence? To test this, I imagine PHP could be built with different versions of libxml/libxslt to see which combinations work or don't. Unfortunately, I'm out of my element in a Windows world, and building PHP on Windows seems over my head. Regrettably, I've been thus far unable to reproduce this problem with an example outside my app, so I'm struggling to narrow it down and can't submit a specific bug.

    So, do you think it is caused by a version mismatch in the distributed binaries? A bug introduced in PHP 5.2.9? A bug introduced in libxml 2.7? Something else? I'm stumped. Any thoughts that could point me in the right direction are greatly appreciated. Thanks.

    Read the article

  • Upload using Python script takes very long on one laptop compared to another

    - by Engr Am
    I have Python 2.7 code which uses the STORBINARY function for uploading files to an FTP server and RETRBINARY for downloading from this server. However, the upload takes a very long time on three laptops from different brands as compared to a Dell laptop. The strange part: when I manually upload any file, it takes the same time on all the systems.

    The manual upload rate and the upload rate with the Python script are the same on the Dell laptop. However, on every other brand of laptop (I have tried IBM, Toshiba, Fujitsu-Siemens) the Python script uploads much more slowly than a manual attempt. On all these other laptops the script's upload rate is the same (1 Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried varying the file size for the upload, to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. The download rate using this script is fine on all the systems and the same as the manual download rate.

    I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I upload from only one laptop at a time. All the laptops have almost the same system configuration, the same operating system and approximately the same free drive space. If anything, the Dell laptop has slightly less processing power and RAM than two of the others, but I suppose this has no effect, as I have checked many times how much CPU and network were used during these uploads and downloads, and I am sure no virus or other program has been eating up my bandwidth. Here is the code ('ftp' and 'file_path' are inputs to the function):

        path, filename = os.path.split(file_path)
        filesize = os.path.getsize(file_path)
        deffilesize = (filesize / 1024) / 1024
        f = open(file_path, "rb")
        upstart = time.clock()
        print ftp.storbinary("STOR " + filename, f)
        upende = time.clock() - upstart
        outname = "Upload "
        f.close()
        return upende, deffilesize, outname
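
    One thing worth ruling out (not part of the original question - this is an added sketch): ftplib's storbinary takes a blocksize argument, 8192 bytes by default, and trying a larger per-write buffer is a cheap experiment on the laptops that are slow. A minimal Python 2.7 sketch for timing an upload with a configurable block size; the host, credentials and file path are hypothetical placeholders:

        # Hedged sketch: time an FTP upload with a configurable blocksize.
        # Host, login and file path below are placeholders.
        import ftplib
        import os
        import time

        def timed_upload(host, user, password, file_path, blocksize=65536):
            """Upload file_path; return (seconds elapsed, size in MB)."""
            ftp = ftplib.FTP(host, user, password)
            size_mb = os.path.getsize(file_path) / (1024.0 * 1024.0)
            f = open(file_path, "rb")
            start = time.time()
            # blocksize controls how many bytes are handed to each socket send
            ftp.storbinary("STOR " + os.path.basename(file_path), f, blocksize)
            elapsed = time.time() - start
            f.close()
            ftp.quit()
            return elapsed, size_mb

        # Usage: compare the default 8 KB blocks against 64 KB blocks
        # print(timed_upload("ftp.example.com", "user", "secret", "test.bin", 8192))
        # print(timed_upload("ftp.example.com", "user", "secret", "test.bin", 65536))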

    Read the article

  • Problems uploading different file types in codeigniter

    - by Drew
    Below is the script I'm using to upload different files. All the solutions I've found deal only with multiple image uploads. I am totally stumped for a solution on this. Can someone tell me what I'm supposed to be doing to upload different file types in the same form? Thanks.

        function do_upload()
        {
            $config['upload_path'] = './uploads/nav';
            $config['allowed_types'] = 'gif|jpg|png';
            $config['max_size'] = '2000';
            $this->load->library('upload', $config);

            if ( ! $this->upload->do_upload('userfile'))
            {
                $error = array('error' => $this->upload->display_errors());
                return $error;
            }
            else
            {
                $soundfig['upload_path'] = './uploads/nav';
                $soundfig['allowed_types'] = 'mp3|wav';
                $this->load->library('upload', $soundfig);

                if ( ! $this->upload->do_upload('soundfile'))
                {
                    $error = array('error' => $this->upload->display_errors());
                    return $error;
                }
                else
                {
                    $data = $this->upload->data('userfile');
                    $sound = $this->upload->data('soundfile');
                    $full_path = 'uploads/nav/' . $data['file_name'];
                    $sound_path = 'uploads/nav/' . $sound['file_name'];
                    if ($this->input->post('active') == '1') {
                        $active = '1';
                    } else {
                        $active = '0';
                    }
                    $spam = array(
                        'image_url' => $full_path,
                        'sound'     => $sound_path,
                        'active'    => $active,
                        'url'       => $this->input->post('url')
                    );
                    $id = $this->input->post('id');
                    $this->db->where('id', $id);
                    $this->db->update('NavItemData', $spam);
                    return true;
                }
            }
        }

    Here is my form:

        <?php echo form_open_multipart('upload/do_upload'); ?>
        <?php if (isset($buttons)) : foreach ($buttons as $row) : ?>
            <h2><?php echo $row->name; ?></h2>
            <input type="file" name="userfile" size="20" /><br />
            <input type="file" name="soundfile" size="20" />
            <input type="hidden" name="oldfile" value="<?php echo $row->image_url; ?>" />
            <input type="hidden" name="id" value="<?php echo $row->id; ?>" />
            <br /><br />
            <label>Url: </label><input type="text" name="url" value="<?php echo $row->url; ?>" /><br />
            <input type="checkbox" name="active" value="1" <?php if ($row->active == '1') { echo 'checked'; } ?> /><br /><br />
            <input type="submit" value="submit" />
        </form>
        <?php endforeach; ?>
        <?php endif; ?>

    Read the article

  • multi-core processing in R on Windows XP - via doMC and foreach

    - by Jan
    Hi guys, I'm posting this question to ask for advice on how to make optimal use of multiple processors from R on a Windows XP machine.

    At the moment I create 4 scripts (each with, e.g., for (i in 1:100), for (i in 101:200), etc.) which I run in 4 different R sessions at the same time. This seems to use all the available CPU, but I would like to do it a bit more efficiently. One solution could be the "doMC" and "foreach" packages, but this is not possible in R on a Windows machine. E.g.:

        library("foreach")
        library("strucchange")
        library("doMC")    # would this be possible on a Windows machine?
        registerDoMC(2)    # for a computer with two cores (processors)

        ## Nile data with one breakpoint: the annual flows drop in 1898
        ## because the first Ashwan dam was built
        data("Nile")
        plot(Nile)

        ## F statistics indicate one breakpoint
        fs.nile <- Fstats(Nile ~ 1)
        plot(fs.nile)
        breakpoints(fs.nile)    # , hpc = "foreach" --> It would be great to test this.
        lines(breakpoints(fs.nile))

    Any solutions or advice? Thanks, Jan

    Read the article

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository, the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to update only those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this, please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a high-spec dedicated server). The development site is approx. 7.5 GB in size and contains approx. 600,000 items, mainly due to having multiple branches/tags.
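
    A sketch of one possible approach (not from the original post): have the hook ask svnlook which paths changed, trim them back to their lowest common directory, and run a targeted update. Script name and working-copy path are hypothetical; svnlook's "changed" output prefixes each path with a two-character status code and two spaces.

        # Hedged sketch of a post-commit helper: svn update only the lowest
        # common directory of the changed paths. Paths are placeholders.
        import os
        import subprocess
        import sys

        def update_changed(repo, rev, working_copy):
            out = subprocess.check_output(["svnlook", "changed", "-r", rev, repo]).decode("utf-8")
            # Lines look like "U   branches/feature_x/css/screen.css"
            paths = [line[4:].strip() for line in out.splitlines() if line.strip()]
            if not paths:
                return
            common = os.path.commonprefix(paths)
            # commonprefix works character-by-character, so cut back to a "/"
            common = common[:common.rfind("/") + 1]
            subprocess.check_call(["svn", "up", os.path.join(working_copy, common)])

        if __name__ == "__main__":
            # post-commit receives REPOS and REV as its two arguments
            update_changed(sys.argv[1], sys.argv[2], "/home/www/dev_ssl")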

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas.

    I have 1 EC2 DB server (MySQL + NFS file sharing + memcached) and 3 EC2 web servers (lighttpd) which mount the NFS folders from the DB server. Everything had been going smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything goes back to normal. Plain files like .html are unaffected. All servers have the same problem at exactly the same time.

    I have spent one whole day analyzing the cause. Finally, I found that when the problem appears, the number of file descriptors held by lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds. The number is around 250 in normal times; however, when the problem appears, it rises to 1500 and then drops back to normal. Sounds funny, right? Do you have any idea what's going on?

    ========================
    The CPU graph of one of the web servers.
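
    Not from the original post, but one way to see what those extra descriptors actually are (rather than only counting them) is a small watcher that samples /proc/<pid>/fd and prints any new link targets during the spike. The PID below is a placeholder:

        # Hedged sketch: log fd counts and newly appeared fd targets for a
        # process once per second. Run it against the lighttpd PID.
        import os
        import time

        def watch_fds(pid, interval=1.0):
            fd_dir = "/proc/%d/fd" % pid
            seen = set()
            while True:
                names = os.listdir(fd_dir)
                targets = set()
                for name in names:
                    try:
                        targets.add(os.readlink(os.path.join(fd_dir, name)))
                    except OSError:
                        pass  # fd closed between listdir and readlink
                new = targets - seen
                print("%s  %4d fds, %d new" % (time.strftime("%H:%M:%S"), len(names), len(new)))
                for target in sorted(new):
                    print("    " + target)  # e.g. socket:[12345] or a file path
                seen = targets
                time.sleep(interval)

        # watch_fds(1234)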

    Read the article

  • Should a setter return immediately if assigned the same value?

    - by Andrei Rinea
    In classes that implement INotifyPropertyChanged I often see this pattern:

        public string FirstName
        {
            get { return _customer.FirstName; }
            set
            {
                if (value == _customer.FirstName)
                    return;
                _customer.FirstName = value;
                base.OnPropertyChanged("FirstName");
            }
        }

    Precisely the lines

        if (value == _customer.FirstName)
            return;

    are bothering me. I've often done this, but I am not sure it's needed or good. After all, if a caller assigns the very same value, I don't want to reassign the field and, especially, notify my subscribers that the property has changed when, semantically, it didn't. Apart from saving some CPU/RAM/etc. by freeing the UI from updating something that will probably look the same on screen, what do we gain? Could some people force a refresh by reassigning the same value to a property (not that this would be good practice, however)?

    1. Should we do it or shouldn't we?
    2. Why?

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server with MySQL, and I start 7 delayed_job processes:

        RAILS_ENV=production script/delayed_job -n 7 start

    Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same record-row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find the lock directive where I would expect it. I think each process should lock the table before executing an UPDATE on the locked_by column. Instead, they lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by ...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think this does not have an effect in this case. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    I think in some cases two processes can read the same information and start processing the same job.

    Q2: Is 7 delayed_job processes a good number for an 8-CPU server? Why yes/no? Thx 10x!

    Read the article

  • implement SIMD in C++

    - by Hristo
    I'm working on a bit of code and I'm trying to optimize it as much as possible, basically to get it running under a certain time limit. The following makes the call...

        static affinity_partitioner ap;
        parallel_for(blocked_range<size_t>(0, T), LoopBody(score), ap);

    ... and the following is what is executed:

        void operator()(const blocked_range<size_t> &r) const
        {
            int temp;
            int i;
            int j;
            size_t k;
            size_t begin = r.begin();
            size_t end = r.end();

            for (k = begin; k != end; ++k) {        // for each trainee
                temp = 0;
                for (i = 0; i < N; ++i) {           // for each sample
                    int trr = trRating[k][i];
                    int ei = E[i];
                    for (j = 0; j < ei; ++j) {      // for each expert
                        temp += delta(i, trr, exRating[j][i]);
                    }
                }
                myscore[k] = temp;
            }
        }

    I'm using Intel's TBB to optimize this. But I've also been reading about SIMD and SSE2 and things of that nature. So my question is: how do I store the variables (i, j, k) in registers so that they can be accessed faster by the CPU? I think the answer has to do with implementing SSE2 or some variation of it, but I have no idea how to do that. Any ideas? Thanks, Hristo

    Read the article

  • Dependency Properties, change notification and setting values in the constructor

    - by stefan.at.wpf
    Hello, I have a class with 3 dependency properties A, B, C. The values of these properties are set by the constructor, and every time one of the properties A, B or C changes, the method recalculate() is called. During execution of the constructor this method is therefore called 3 times, because the 3 properties A, B, C are changed. However, this isn't necessary, as recalculate() can't do anything really useful until all 3 properties are set.

    So what's the best way to keep property-change notification but circumvent it during the constructor? I thought about subscribing to the property-changed notification in the constructor instead, but then each object of the DPChangeSample class would keep adding more and more change notifications. Thanks for any hint!

        class DPChangeSample : DependencyObject
        {
            public static DependencyProperty AProperty = DependencyProperty.Register("A", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged));
            public static DependencyProperty BProperty = DependencyProperty.Register("B", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged));
            public static DependencyProperty CProperty = DependencyProperty.Register("C", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged));

            private static void propertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                ((DPChangeSample)d).recalculate();
            }

            private void recalculate()
            {
                // Using A, B, C, do some CPU-intensive calculations
            }

            public DPChangeSample(int a, int b, int c)
            {
                SetValue(AProperty, a);
                SetValue(BProperty, b);
                SetValue(CProperty, c);
            }
        }

    Read the article

  • Strategy for animating a lot of "LED's" - thread?, UIView animations? NSOperation? (iPhone)

    - by RickiG
    Hi, I have to build several different views containing 72 "LED" lights. I built an LED class so I can loop through the LEDs and set them to different colors (Green, Red, Orange, Blue, None, etc.). The LED then loads the appropriate .png. This works fine; I loop over the LEDs and set them:

        for (LED *l in self.ledArray) {
            [l display:Green];
        }

    Now I know that at some point they will need to not just turn on/off or change color, but will have to turn on with a small delay, like an equalizer. I have 5-10 views containing the 72 LEDs, and I would like to achieve this with the minimum amount of memory/CPU strain.

    I simply loop as shown above, and inside the LED there is a switch case that does the correct logic. If these were actual LEDs and a microcontroller I would use sleep(100) or similar in the loop, but I would really like to avoid stuff like that for obvious reasons. I was thinking that doing a performOnThread withDelay would be really consuming, while UIView animations changing the alpha, or NSOperation, would be a lot of lifting for a small feature. Is there an efficient and clever way to go about this? Thanks for any inspiration given :)

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process.

    I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; resources available are 8 GB RAM, 500 GB free disk space and a 4-core CPU at 3 GHz.

    I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei

    PS: I have looked at the site's suggested "related questions" and found nothing relevant.
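
    A sketch of how that loop could look (this is not Hotei's script; the tarball naming pattern and layout are assumptions): everything happens on master, each tarball replaces the working tree wholesale, and git's content tracking turns the 99.5% overlap into cheap storage. No branch switching is involved, so the bash-inside-a-branch worry doesn't arise.

        # Hedged sketch: import sequential tarballs as one linear git history.
        # Assumes names like tarballs/project-1.001.tar.gz ... project-1.650.tar.gz
        # and that each tarball unpacks directly into the tree (no wrapper dir).
        import glob
        import os
        import shutil
        import subprocess
        import tarfile

        WORK = "work"

        def git(*args):
            subprocess.check_call(("git",) + args, cwd=WORK)

        os.mkdir(WORK)
        git("init")
        for ball in sorted(glob.glob("tarballs/project-*.tar.gz")):
            version = os.path.basename(ball).split("-")[-1].replace(".tar.gz", "")
            # wipe last week's tree (but keep .git), then unpack this week's
            for name in os.listdir(WORK):
                if name == ".git":
                    continue
                path = os.path.join(WORK, name)
                shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
            tarfile.open(ball).extractall(WORK)
            git("add", "-A")
            git("commit", "--allow-empty", "-m", "Import version %s" % version)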

    Read the article

  • Java iteration reading & parsing

    - by Patrick Lorio
    I have a log file that I am reading into a string:

        public static String Read(String path) throws IOException {
            StringBuilder sb = new StringBuilder();
            InputStream in = new BufferedInputStream(new FileInputStream(path));
            int r;
            while ((r = in.read()) != -1) {
                sb.append((char) r); // cast needed: append(int) would append the number
            }
            return sb.toString();
        }

    Then I have a parser that iterates over the entire string once:

        void Parse() {
            String con = Read("log.txt");
            for (int i = 0; i < con.length(); i++) {
                /* parsing action */
            }
        }

    This is hugely wasteful of CPU cycles: I loop over all the content in Read, then loop over all of it again in Parse. I could just place the /* parsing action */ inside the while loop in the Read method, but I don't want to copy the same code all over the place. How can I parse the file in one iteration over the contents and still have separate methods for parsing and reading? In C# I understand there is some sort of yield return mechanism, but I'm stuck with Java. What are my options in Java?
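
    The usual Java answer is to invert control: give Read a callback (a one-method interface) and invoke it per character, so the parsing action runs during the single read pass. Sketched below in Python for brevity (names are hypothetical); in Java the handle argument becomes an interface with one method, implemented by the parser.

        # Hedged sketch of the callback/visitor shape, in Python for brevity.
        # Reading and parsing stay in separate functions, but the parsing
        # hook fires inside the single read loop.
        def read(path, handle):
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(8192)
                    if not chunk:
                        break
                    for ch in chunk:
                        handle(ch)  # the "parsing action" hook

        def parse(path):
            count = [0]  # mutable cell so the closure can update it
            def on_char(ch):
                count[0] += 1  # real parsing logic would go here
            read(path, on_char)
            return count[0]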

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    I have a range of Windows and Linux machines. Some machines are EBS-backed, whereas others are S3-backed. I need to be able to persist a machine (put it to sleep), preferably keeping all the settings it had while running, and I need to be able to quickly wake a machine from sleep (ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon).

    Here's the stuff that confuses me: AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones. I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU). S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine. Also, I can't immediately tell which machines are EBS-backed and which are S3-backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS- or S3-backed. Advice?
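
    For the EBS-backed case, "sleep" is just a stop/start of the instance. Not part of the question, but a sketch with the boto library (region name and instance id are placeholders) looks roughly like this; stop/start only works for EBS-backed instances, which is also one quick way to tell the two types apart:

        # Hedged sketch using boto; region and instance id are hypothetical.
        # stop_instances fails for S3 (instance-store) backed instances.
        import boto.ec2

        conn = boto.ec2.connect_to_region("us-east-1")
        conn.stop_instances(instance_ids=["i-12345678"])   # "sleep": EBS state survives
        # ... later ...
        conn.start_instances(instance_ids=["i-12345678"])  # wake the machine back up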

    Read the article

  • JAVASCRIPT ENABLED [closed]

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on:

        Your Javascript is disabled. Limited functionality is available.

    It will stay for maybe a day, sometimes two. I have uninstalled JavaScript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received, many thanks. Dominic

    P.S. My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }

                    flag1 = false;
                    flag2 = false;
                    val = 0;

                    Parallel.Invoke(A1, A2);

                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is: why? I suppose that the CPU reorders the flag1 = true; write with the if (flag2) read, but the variables flag1 and flag2 are marked as volatile fields...

    Read the article

  • Organising XML results as cells in container (AS3)

    - by PJ Palomaki
    Hi, I'm having some problems figuring out how to organise data pulled from XML as cells within a container. I'm sure this should be a basic thing in AS3, but my head's fried.. can anyone help?

    Basically an array is fed to callThumbs(), which iterates through it and compares the entries with the preloaded XML _my_images. If a match is found, it's sent to processXML, which loads all the relevant info plus a .jpg thumbnail. All this is then fed to createCell, which creates a cell with position values depending on x_counter and y_counter (4 cells in a row) and adds the cell to the container _container_mc.

    The problem: this all works and looks fine, except that the cells within the container do not display in descending order. They are in random order, probably because some of the .jpgs take longer to load than others. How do I organise the cells within the container in descending order by the XML .id value? Or how do I tell Flash to wait until the thumbnail and data are loaded and the cell created and added?

    Thanks guys, I would really appreciate any help! PJ

        // Flash (AS3)
        function callThumbs(_my_results:Array):void {
            // selector = 1 for specific items, 2 for search items
            var _thumb_url:XML;
            for (var r:Number = 0; r < _my_results.length; r++) {
                // iterate through the results, compare with _my_images XML .id
                for (var i:Number = 0; i < _my_images.length(); i++) {
                    if (_my_images[i].@id.toString() == _my_results[r]) {
                        _thumb_url = _my_images[i];
                        processXML(_thumb_url, i);
                    }
                }
            }
        } // End callThumbs

        function processXML(imageXML:XML, num:Number) {
            // Processes XML data and loads the .jpg thumbnail
            var _thumb_loader = new Loader();
            _thumb_loader.load(new URLRequest("thumbs/thumb_sm/" + imageXML.@id + "_st.jpg"));
            _thumb_loader.contentLoaderInfo.addEventListener(Event.COMPLETE, thumbLoaded);
            _thumb_loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, urlNotFound);

            var id:XMLList = new XMLList();
            id = imageXML.@id;
            var description:XMLList = new XMLList();
            description = imageXML.@description;

            function urlNotFound(event:IOErrorEvent):void {
                trace("The image URL '" + String(imageXML.@id) + "' was not found.");
            }

            function thumbLoaded(e:Event):void {
                var imageLoader:Loader = Loader(e.target.loader);
                var bm:Bitmap = Bitmap(imageLoader.content);
                createCell(bm, id, description, num);
                adjustFooterBar(); // Adjust bottom footer
            }
        } // End processXML

        private function createCell(_image:Bitmap, _id, _description:String, _position):void {
            // Creates a cell with data, adds it to the container
            var _cell_mc = new CellTitle();
            _cell_mc.initCell(_image, _id, _description, _position, x_counter, y_counter);
            if (x_counter + 1 < 4) {
                x_counter++;
            } else {
                x_counter = 0;
                y_counter++;
            }
            _container_mc.addChild(_cell_mc); // movieclip container
        } // End createCell

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second. About a x60 speed-up.

    Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, BTW, once optimized dropped the runtime to less than a second - more than a x2 speed-up). Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/executable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples against the currently running process. I'd like it to always log its samples against the process I'm profiling, so if the process is currently switched out (blocking on IO or a popen call) OProfile would just attribute the sample to the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on a DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms) - times too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. What is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user got tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. Looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing some server-level overload play out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple requests only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether something amiss in the code causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but the coincidence is strange. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        // DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); // sql to load
        if ($file->load()) {
            $file->serve();
        }

        // FILES
        function serve()
        {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }

    Read the article

  • querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running SQL queries on a MySQL DB table that has 110M+ unique records for a whole day.

    Problem: whenever I run any query with a WHERE clause, it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole table. Could you please guide me in optimizing/restructuring the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-core dual CPU 3 GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
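
    Not from the original question, but the schema above defines no indexes at all, so every WHERE clause is a full scan of 110M+ rows; indexing the column you filter on is the first thing to try. A sketch from Python (the column choice, credentials and database name are assumptions):

        # Hedged sketch: add an index on the filtered column, then check with
        # EXPLAIN that queries use it. Credentials and column are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="logs")
        cur = conn.cursor()

        # One-time cost (slow on 110M MyISAM rows), paid back on every query.
        cur.execute("CREATE INDEX idx_user ON RAW_LOG_20100504 (`USER`)")

        # "key: idx_user" in the EXPLAIN output means no more full table scan.
        cur.execute("EXPLAIN SELECT COUNT(*) FROM RAW_LOG_20100504 WHERE `USER` = %s", (123,))
        print(cur.fetchall())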

    Read the article

  • Google Chrome Frame and Facebook Javascript SDK - Cannot login

    - by Giannis Savvakis
    In the example below I have an HTML page with the JavaScript code needed to log in to Facebook. In the <head> I have the Google Chrome Frame meta tag that makes the page run with Google Chrome Frame. If you open this page with any browser, the finish() callback runs normally. If you open it with Google Chrome Frame, it never fires. This means that every Facebook app that tries to log in to gather user data cannot log in when opened with Google Chrome Frame. But even if I remove the meta tag so that the page can open with IE8, the page opens with Google Chrome Frame anyway, because Facebook opens Google Chrome Frame by default. So because this is a Facebook app that runs inside an iframe on facebook.com, it is forced to open with Google Chrome Frame! SERIOUS BUG!

    I have seen other people reporting it; someone has made a test Facebook app here: http://apps.facebook.com/gcftest/

    appID and channelUrl are dummy values in the example below.

        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        <head>
        <meta name="generator" content="HTML Tidy for Linux/x86 (vers 11 February 2007), see www.w3.org" />
        <meta charset="utf-8" />
        <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
        <meta http-equiv="Pragma" content="no-cache" />
        <meta http-equiv="Expires" content="0" />
        <meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=IE8" />
        <title>Facebook Login</title>
        <script type="text/javascript">
        //<![CDATA[
        // Load the SDK asynchronously
        (function(d){
            var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0];
            if (d.getElementById(id)) { return; }
            js = d.createElement('script'); js.id = id; js.async = true;
            js.src = "//connect.facebook.net/en_US/all.js";
            ref.parentNode.insertBefore(js, ref);
        }(document));

        var appID = '0000000000000';
        var channelUrl = '//myhost/channel.html';

        // Init the SDK upon load
        window.fbAsyncInit = function() {
            FB.init({
                appId      : appID,      // App ID
                channelUrl : channelUrl,
                status     : true,       // check login status
                cookie     : true,       // enable cookies to allow the server to access the session
                xfbml      : true        // parse XFBML
            });

            FB.Event.subscribe('auth.statusChange', function(response) {
                if (!response.authResponse)
                    FB.login(finish, {scope: 'publish_actions,publish_stream'});
                else
                    finish(response);
            });

            FB.getLoginStatus(finish);
        }

        function finish(response) {
            alert("Hello " + response.name);
        }
        //]]>
        </script>
        </head>
        <body>
        <h1>Facebook login</h1>
        <p>Do NOT close this window.</p>
        <p>please wait...</p>
        </body>
        </html>

    Read the article

  • Call method immediately after object construction in LINQ query

    - by Steffen
    I've got some objects which implement this interface:

        public interface IRow
        {
            void Fill(DataRow dr);
        }

    Usually when I select something out of the DB, I go:

        public IEnumerable<IRow> SelectSomeRows()
        {
            DataTable table = GetTableFromDatabase();
            foreach (DataRow dr in table.Rows)
            {
                IRow row = new MySQLRow(); // Disregard the MySQLRow type, it's not important
                row.Fill(dr);
                yield return row;
            }
        }

    Now with .NET 4 I'd like to use AsParallel, and thus LINQ. I've done some testing on it, and it speeds things up a lot (IRow.Fill uses reflection, so it's hard on the CPU). Anyway, my problem is: how do I go about creating a LINQ query which calls Fill as part of the query, so it's properly parallelized? For testing performance I created a constructor which takes the DataRow as an argument; however, I'd really love to avoid this if somehow possible. With the constructor in place, it's obviously simple enough:

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return from DataRow dr in table.Rows.AsParallel()
                   select new MySQLRow(dr);
        }

    However, like I said, I'd really love to be able to just stuff my Fill method into the LINQ query and thus not need the constructor overload.

    Read the article

  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca, and I understand that these are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it so that once and for all we'll have this code working:

        /*#define allocate_on_stack(pointer, size) \
            __asm \
            { \
                mov [pointer], esp; \
                sub esp, [size]; \
            }*/

        /*#define deallocate_from_stack(size) \
            __asm \
            { \
                add esp, [size]; \
            }*/

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm
            {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for (int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm
            {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? Me, for example - I'm not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.

    Read the article
