Search Results

Search found 6363 results on 255 pages for 'buford speed'.


  • Parallel.For Batching

    - by chibacity
    Is there built-in support in the TPL for batching operations? I was recently playing with a routine to carry out character replacement on a character array which required a lookup table i.e. transliteration: for (int i = 0; i < chars.Length; i++) { char replaceChar; if (lookup.TryGetValue(chars[i], out replaceChar)) { chars[i] = replaceChar; } } I could see that this could be trivially parallelized, so jumped in with a first stab which I knew would perform worse as the tasks were too fine-grained: Parallel.For(0, chars.Length, i => { char replaceChar; if (lookup.TryGetValue(chars[i], out replaceChar)) { chars[i] = replaceChar; } }); I then reworked the algorithm to use batching so that the work could be chunked onto different threads in less fine-grained batches. This made use of threads as expected and I got some near linear speed up. I'm sure that there must be built-in support for batching in the TPL. What is the syntax, and how do I use it? const int CharBatch = 100; int charLen = chars.Length; Parallel.For(0, ((charLen / CharBatch) + 1), i => { int batchUpper = ((i + 1) * CharBatch); for (int j = i * CharBatch; j < batchUpper && j < charLen; j++) { char replaceChar; if (lookup.TryGetValue(chars[j], out replaceChar)) { chars[j] = replaceChar; } } });
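
    For reference, the TPL does ship a range partitioner that does this chunking for you: System.Collections.Concurrent.Partitioner.Create(from, to) hands Parallel.ForEach contiguous index ranges. A minimal sketch, assuming the same chars array and lookup dictionary as above:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        static void Transliterate(char[] chars, Dictionary<char, char> lookup)
        {
            // Partitioner.Create(from, to) gives each worker a contiguous Tuple<int, int> range,
            // so the delegate overhead is paid once per chunk rather than once per character.
            Parallel.ForEach(Partitioner.Create(0, chars.Length), range =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    char replaceChar;
                    if (lookup.TryGetValue(chars[i], out replaceChar))
                    {
                        chars[i] = replaceChar;
                    }
                }
            });
        }

    An overload of Partitioner.Create also accepts an explicit range size if the default chunking turns out too coarse or too fine for this workload.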

    Read the article

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, it seems that there is always about a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, and got the same result. static string RunCommand(string executable, string path, string arguments) { var psi = new ProcessStartInfo() { FileName = executable, Arguments = arguments, WorkingDirectory = path, UseShellExecute = false, RedirectStandardError = true, RedirectStandardInput = true, RedirectStandardOutput = true, WindowStyle = ProcessWindowStyle.Maximized, CreateNoWindow = true }; var sbOut = new StringBuilder(); var sbErr = new StringBuilder(); var sw = new Stopwatch(); sw.Start(); var process = Process.Start(psi); TimeSpan firstRead = TimeSpan.Zero; process.OutputDataReceived += (s, e) => { if (firstRead == TimeSpan.Zero) { firstRead = sw.Elapsed; } sbOut.Append(e.Data); }; process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data); process.BeginOutputReadLine(); process.BeginErrorReadLine(); var eventsStarted = sw.Elapsed; process.WaitForExit(); var processExited = sw.Elapsed; sw.Reset(); if (process.ExitCode != 0 || sbErr.Length > 0) { Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString()); } return sbOut.ToString(); } Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.

    Read the article

  • increase number of photos from flickr using json

    - by Andrew Welch
    Hi, this is my code. Is it possible to get more photos from flickr? What is the standard / default number? $(document).ready(function(){ $.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?id=48719970@N07&lang=en-us&format=json&jsoncallback=?", function(data){ $.each(data.items, function(i, item){ var newurl = 'url(' + item.media.m + ')'; $("<div class='images'/>").css('background', newurl).css('backgroundPosition','top center').css('backgroundRepeat','no-repeat').appendTo("#images").wrap("<a target=\"_blank\ href='" + item.link + "'></a>"); }) $("#title").html(data.title); $("#description").html(data.description); $("#link").html("<a href='" + data.link + "' target=\"_blank\">Visit the Viget Inspiration Pool!</a>"); //Notice that the object here is "data" because that information sits outside of "items" in the JSON feed $('.jcycleimagecarousel').cycle({ fx: 'fade', speed: 300, timeout: 3000, next: '#next', prev: '#prev', pause: 1, random: 1 }); }); });

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so CREATE TABLE t ( a BIGSERIAL NOT NULL, -- 8 b b SMALLINT, -- 2 b c SMALLINT, -- 2 b d REAL, -- 4 b e REAL, -- 4 b f REAL, -- 4 b g INTEGER, -- 4 b h REAL, -- 4 b i REAL, -- 4 b j SMALLINT, -- 2 b k INTEGER, -- 4 b l INTEGER, -- 4 b m REAL, -- 4 b CONSTRAINT a_pkey PRIMARY KEY (a) ) The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes to the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 Terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below -- Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field. Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like SELECT a, arr[5] FROM t; Would I save space with the array option? Would there be a speed penalty? Any other ideas?
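
    On the array idea, a rough sketch of what that layout could look like (the table name is made up; note that a PostgreSQL array value carries its own header of roughly 20-24 bytes and is indexed from 1, so the saving is not automatic and is worth measuring, for example with pg_column_size):

        -- Hypothetical packed layout: columns b..m collapsed into one REAL[] column.
        CREATE TABLE t_packed (
            a    BIGSERIAL PRIMARY KEY,
            vals REAL[]    -- b..m in a fixed order; SMALLINT/INTEGER values are cast to REAL
        );

        -- "Give me column g" becomes element 6 (PostgreSQL arrays are 1-based):
        SELECT a, vals[6] FROM t_packed;

        -- Compare the on-disk row sizes of the two layouts before committing to either:
        SELECT pg_column_size(t.*) FROM t LIMIT 100;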

    Read the article

  • What would you write in a Constitution (law book) for a programmers' country? [closed]

    - by Developer Art
    After we have got our great place to talk about professional matters and socialize online - on SO, I believe the next logical step would be to found our own country! I invite you all to participate and bring together your available resources. We will buy an island, better even a group of islands so that we could establish states like .NET territories, Java land, Linux republic etc. We will build a society of programmers (girls - we need you too). On the organizational side, we're going to need some sort of a Constitution or a law book. I suggest we write it together. I'm making it a wiki as it should be a cooperative effort. I'll open the work. Section 1. Programmers' rights. Every citizen has a right to an Internet connection 24/7. Every citizen can freely choose the field of interest. Section 2. Programmers' obligations. Every citizen must embrace the changing nature of the profession and constantly educate himself. Section 3. Law enforcement. Code duplication, when it can be avoided, is punished by limiting the bandwidth speed to 64Kbit for a period of one week. Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month. Usage of technologies older than 5/10 years is punished by restricting web access to the sites last updated 5/10 years ago for a period of one month. Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the country foundation. A purchase fund will be established shortly. Everyone is invited to participate.

    Read the article

  • java url connection, wait for data being sent through the outputstream

    - by Mateu
    I'm writing a java class that tests the upload speed of a connection to a server. I want to check how much data can be sent in 5 seconds. I've written a class which creates a URL, creates a connection, and sends data through the OutputStream. There is a loop where I write data to the stream for 5 seconds. However I'm not able to see when the data has been sent (I write data to the output stream, but the data is not sent yet). How can I wait until the data is really sent to the server? Here goes my code (which does not work): URL u = new URL(url); HttpURLConnection uc = (HttpURLConnection) u.openConnection(); uc.setDoOutput(true); uc.setDoInput(true); uc.setUseCaches(false); uc.setDefaultUseCaches(false); uc.setRequestMethod("POST"); uc.setRequestProperty("Content-Type", "application/octet-stream"); uc.connect(); st.start(); // Send the request OutputStream os = uc.getOutputStream(); //This while is incorrect because it does not wait for data being sent while (st.getElapsedTime() < miliSeconds) { os.write(buffer); os.flush(); st.addSize(buffer.length); } os.close(); Thanks
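
    One likely cause worth noting: by default HttpURLConnection buffers the entire POST body in memory and only puts it on the wire once the response is requested, so the writes in the loop return immediately. Switching the connection to chunked streaming mode makes each write go out to the socket. A sketch under that assumption, reusing url, buffer and the 5-second budget from the code above:

        HttpURLConnection uc = (HttpURLConnection) new URL(url).openConnection();
        uc.setDoOutput(true);
        uc.setRequestMethod("POST");
        uc.setRequestProperty("Content-Type", "application/octet-stream");
        uc.setChunkedStreamingMode(8192);   // stream in 8 KB chunks instead of buffering the whole body

        OutputStream os = uc.getOutputStream();
        long deadline = System.currentTimeMillis() + 5000;
        long bytesSent = 0;
        while (System.currentTimeMillis() < deadline) {
            os.write(buffer);
            os.flush();
            bytesSent += buffer.length;     // now roughly tracks what leaves the JVM
        }
        os.close();
        uc.getResponseCode();               // complete the exchange

    The count is still approximate, since writes land in OS socket buffers, but it is no longer just filling an in-memory copy of the request.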

    Read the article

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain": Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. Google then says that the best way to do this is to buy a new domain and set it to point to your current one: To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources. This is pretty straight forward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to do. Apologies if this is a real newb question, I'm new at this! Thanks.
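
    One way to keep ASP.NET from issuing cookies on the static domain is to switch off, in that site's web.config, the features that emit them; the .ASPXANONYMOUS cookie in particular comes from anonymous identification. A sketch of a minimal configuration for the static-content site (exactly which settings you need depends on what the application has enabled):

        <configuration>
          <system.web>
            <anonymousIdentification enabled="false" />
            <sessionState mode="Off" />
            <authentication mode="None" />
          </system.web>
        </configuration>

    And if the static domain is set up as a plain IIS site that never runs managed code at all, there is nothing left to set cookies in the first place.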

    Read the article

  • How can I embed images within my application and use them in HTML control?

    - by Atara
    Is there any way I can embed the images within my exe (as resource?) and use it in generated HTML ? Here are the requirements: A. I want to show dynamic HTML content (e.g. using webBrowser control, VS 2008, VB .Net, winForm desktop application) B. I want to generate the HTML on-the-fly using XML and XSL (file1.xml or file2.xml transformed by my.xsl) C. The HTML may contain IMG tags (file1.gif and or file2.gif according to the xml+xsl transformation) and here comes the complicated one: D. All these files (file1.xml, file2.xml, my.xsl, file1.gif, file2.gif) should be embedded in one exe file. I guess the XML and XSL can be embedded resources, and I can read them as stream, but what ways do I have to reference the image within the HTML ? <IMG src="???" /> I do not want to use absolute path and external files. If the image files are resources, can I use relative path? Relative to what? (I can use BASE tag, and then what?) Can I use stream as in email messages? If so, where can I find the format I need to use? http://www.websiteoptimization.com/speed/tweak/inline-images/ are browser dependent. What is the browser used by webBrowser control? IE? what version? Does it matter if I use GIF or JPG or BMP (or any other image format) for the images? Does it matter if I use mshtml library and not the regular webBrowser control? (currently I use http://www.itwriting.com/htmleditor/index.php ) Does it matter if I upgrade to VS 2010 ? Thanks, Atara

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast (~40 per second for 800x600 images) to hash and an unoptimised search algorithm can go through 3,000 images in 22 mins comparing each one against the others (3/sec). The basic overview is you get an image, rescale it to 8x8 and then convert those pixels to HSV. The Hue, Saturation and Value are then truncated to 4 bits and it becomes one big hex string. Comparing images basically walks along two strings, and then adds the differences it finds. If the total number is below 64 then it's the same image. Different images are usually around 600 - 800. Below 20 is extremely similar. Are there any improvements upon this model I can use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important but the others? To speed up searches I could probably split the 4 bits from each part in half, and put the most significant bits first so if they fail the check then the lsb doesn't need to be checked at all. I don't know an efficient way to store bits like that yet still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
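
    One pattern that does parallelize with ordinary hash functions is a hash of per-block hashes (a simple tree hash): each block is digested on its own core, and the concatenated block digests are hashed once more. The result is not the same value as a single-pass MD5/SHA-256 of the whole file, so both sides of any comparison must agree on the scheme and block size. A rough C# sketch of the idea:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static byte[] BlockTreeHash(string path, int blockSize = 16 * 1024 * 1024)
        {
            var blockDigests = new List<Task<byte[]>>();
            using (var fs = File.OpenRead(path))
            {
                while (true)
                {
                    var block = new byte[blockSize];
                    int read = fs.Read(block, 0, blockSize);
                    if (read == 0) break;
                    int len = read;
                    // Each block is hashed on the thread pool while the reader moves on.
                    // (A production version would bound the number of blocks in flight so
                    // memory stays fixed when the disk outruns the CPUs.)
                    blockDigests.Add(Task.Factory.StartNew(() =>
                    {
                        using (var sha = SHA256.Create())
                            return sha.ComputeHash(block, 0, len);
                    }));
                }
            }
            byte[] combined = blockDigests.SelectMany(t => t.Result).ToArray();
            using (var sha = SHA256.Create())
                return sha.ComputeHash(combined);   // digest of the ordered block digests
        }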

    Read the article

  • Jquery cycle plugin and image tag from code behind

    - by Geetha
    Hi All, I am using the cycle plugin to show images. I am binding the html code for the images from code-behind. Problem: the images are not getting displayed. If I hard-code the image tags, it works. Code: $(window).load(function() { $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", data: "{}", url: "Default.aspx/GetImage", dataType: "json", success: function(data) { alert(data.d); $("#sample").html(data.d); $('#sample').cycle({ fx: 'fade', continuous: true, speed: 7500, timeout: 55000, pause: 1, sync: 1 }); } }); }); Data.d will give: <img src="Images/Frontbanner/s0.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s111.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s112.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s113.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s114.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s115.jpg" width="664" height="428" border="0" /> <img src="Images/Frontbanner/s116.jpg" width="664" height="428" border="0" />

    Read the article

  • getting data from dynamic schema

    - by coure2011
    I am using mongoose/nodejs to get data as json from mongodb. To use mongoose I need to define a schema first, like this var mongoose = require('mongoose'); var Schema = mongoose.Schema; var GPSDataSchema = new Schema({ createdAt: { type: Date, default: Date.now } ,speed: {type: String, trim: true} ,battery: { type: String, trim: true } }); var GPSData = mongoose.model('GPSData', GPSDataSchema); mongoose.connect('mongodb://localhost/gpsdatabase'); var db = mongoose.connection; db.on('open', function() { console.log('DB Started'); }); Then in code I can get data from the db like GPSData.find({"createdAt" : { $gte : dateStr, $lte: nextDate }}, function(err, data) { res.writeHead(200, { "Content-Type": "application/json", "Access-Control-Allow-Origin": "*" }); var body = JSON.stringify(data); res.end(body); }); How do I define a schema for complex data like this? You can see that subSection can be nested to any depth. [ { 'title': 'Some Title', 'subSection': [{ 'title': 'Inner1', 'subSection': [ {'title': 'test', 'url': 'ab/cd'} ] }] }, .. ]
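
    One option when the nesting depth isn't known up front is mongoose's Mixed type: the subSection field then accepts any shape, at the cost of mongoose not tracking changes inside it. A sketch (the schema and document variable names are made up):

        var SectionSchema = new Schema({
          title: { type: String, trim: true },
          url:   { type: String, trim: true },
          // Mixed = "anything goes": subSection can hold arrays of sections nested to any depth.
          subSection: { type: Schema.Types.Mixed }
        });
        var Section = mongoose.model('Section', SectionSchema);

        // Because Mixed contents aren't tracked, tell mongoose when you change them:
        doc.subSection[0].subSection.push({ title: 'test', url: 'ab/cd' });
        doc.markModified('subSection');
        doc.save();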

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this... static void do_SomeFunc1(void* parameter) { // Do stuff. } // Continues on for another 4000 functions... void dispatch(int id, void* parameter) { switch(id) { case ::SomeClass1::id: return do_SomeFunc1(parameter); case ::SomeClass2::id: return do_SomeFunc2(parameter); // This continues for the next 4000 cases... } } When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean and how can I speed compilation while still maintaining the superior cache locality of pulling out the functions from the switch? EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information of the whole project (which isn't just the file in question, but still a good metric; the file in question takes the most time to build): Debug: 8 minutes 50 seconds Release: 4 minutes, 25 seconds
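
    For what it's worth, the pass that -ftime-report names here can be switched off without giving up -g entirely; this is a workaround to try on the debug configuration, not a change to the generated code (the file name below is illustrative):

        # Debug builds only: keep -g but disable the variable-tracking passes that
        # dominate -ftime-report for this one huge switch function.
        g++ -g -O0 -fno-var-tracking-assignments -fno-var-tracking -c dispatch_generated.cpp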

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100.000.000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part. I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map reduce might be something that I can make use of. But can it really be used to make it this fast, or do I need more advanced solutions? I have also been thinking about cutting up the files into smaller files. So that 1 file with 100.000.000 lines is transformed into 100 files with 1.000.000 lines each. But I do not know what is best: 10 files with 100 million lines or 1000 files with 1 million lines? When I try to open the 100 million line file, it takes forever. So I think, maybe, it is just too big to be used. But I don't know if you can write code that will scan it without opening. Speed is the most important factor in this, and I need to know if it can be done as fast as I need it, or if I have to store my data in another way, for example, in a database like mysql or something. Thank you in advance to anybody that can give some good feedback.
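
    For scale, one common trick: since every value has at most 9 digits, membership in one file fits in a bitmap of 10^9 bits (about 125 MB), and "appears in all 10 files" is just the intersection of those bitmaps. A rough sketch in C++; note that simply reading and parsing roughly 1 GB per file is what dominates the time, so whether a 2-second budget is realistic depends mostly on the disks:

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            const std::size_t N = 1000000000;              // values have at most 9 digits
            std::vector<bool> in_all(N, false), in_this;
            for (int f = 1; f < argc; ++f) {               // pass the 10 file names as arguments
                std::FILE* fp = std::fopen(argv[f], "r");
                if (!fp) { std::perror(argv[f]); return 1; }
                in_this.assign(N, false);
                unsigned long v;
                while (std::fscanf(fp, "%lu", &v) == 1)
                    if (v < N) in_this[v] = true;          // mark every number seen in this file
                std::fclose(fp);
                if (f == 1) in_all.swap(in_this);          // first file seeds the running intersection
                else for (std::size_t i = 0; i < N; ++i) in_all[i] = in_all[i] && in_this[i];
            }
            for (std::size_t i = 0; i < N; ++i)
                if (in_all[i]) std::printf("%lu\n", (unsigned long)i);
            return 0;
        }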

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for doing the filtering: var filterCollection = from s in listCollection where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0 orderby s.FilterValue select s; I then assign this collection to a WPF Listbox's ItemSource, and that's the end of it, works fine. Note that the Listbox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However the caveat right now is that when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised with some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.
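
    A sketch of one way to stop keystrokes racing each other without adding a fixed delay: run each filter on a background task and cancel the previous one as soon as the next character arrives (listBox and OnKeyUp are placeholders for the real control and handler):

        CancellationTokenSource filterCts;   // field on the window / control

        void OnKeyUp(string currentWord)
        {
            if (filterCts != null) filterCts.Cancel();   // abandon the previous keystroke's work
            filterCts = new CancellationTokenSource();
            var token = filterCts.Token;

            Task.Factory.StartNew(() =>
                listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList(),
                token)
            .ContinueWith(t =>
            {
                if (!token.IsCancellationRequested)
                    listBox.ItemsSource = t.Result;      // bind only the newest result set
            },
            token,
            TaskContinuationOptions.OnlyOnRanToCompletion,
            TaskScheduler.FromCurrentSynchronizationContext());
        }

    Cancellation here only prevents a stale result set from being bound; a superseded query still runs to completion unless the token is also checked inside the filter itself.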

    Read the article

  • Animation management in COCOS2D iphone.

    - by shreya
    Hi All, I have about 255 image frames for the background animation, 99 frames for the enemy sprite and 125 frames for the player sprite. All animations are running simultaneously on the screen. That is, the background animation is running, 4-5 enemies are present on the screen at a time, and the player is there at the same time. Take a look at the code below, CCAnimation *_enemyAnimation = [CCAnimation animationWithName:@"Enemy" delay:0.1f]; for (int i = 1; i<99; i++) { [_enemyAnimation addFrameWithFilename:[NSString stringWithFormat:@"enemy %02d.jpg",i]]; } id action1 = [CCAnimate actionWithAnimation: _enemyAnimation]; [_enemySprite runAction:[CCRepeatForever actionWithAction: action1]]; [self schedule:@selector(BackToGameLogic:) interval:5.0]; This makes my game much slower and consumes about 65MB of memory in Allocations. How should I manage my animations so that speed improves and memory consumption is reduced? Please suggest a way. Thanks.

    Read the article

  • Android: Speeding up display of (html-formatted) text

    - by prepbgg
    My app uses a StringBuilder to assemble paragraphs of text which are then displayed in a TextView within a ScrollView. The displaytext.xml layout file is: <?xml version="1.0" encoding="utf-8"?> <LinearLayout android:id="@+id/LinearLayout01" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#FFFFFF" xmlns:android="http://schemas.android.com/apk/res/android"> <ScrollView android:id="@+id/ScrollView01" android:layout_width="fill_parent" android:layout_height="wrap_content" > <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/display_text" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textColor="#000000" > </TextView> </ScrollView> </LinearLayout> and the code that displays the StringBuilder object sbText is setContentView(R.layout.displaytext); TextView tv = (TextView)findViewById(R.id.display_text); tv.setText(Html.fromHtml(sbText.toString())); This works OK, except that it gets very slow as the amount of text grows. For example, to display 50 paragraphs totalling about 50KB of text takes over 5 seconds just to execute those three lines of code. Can anyone suggest how I can speed this up, please?
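
    One thing worth trying before restructuring the text: Html.fromHtml() on tens of kilobytes of markup is expensive, and in the code above it runs on the UI thread. Parsing in the background and handing only the finished Spanned to the TextView keeps the UI responsive; a sketch (the time the TextView then needs to measure and lay out a very long Spanned is a separate cost this does not remove):

        new AsyncTask<String, Void, Spanned>() {
            @Override
            protected Spanned doInBackground(String... html) {
                return Html.fromHtml(html[0]);          // parse off the UI thread
            }
            @Override
            protected void onPostExecute(Spanned result) {
                TextView tv = (TextView) findViewById(R.id.display_text);
                tv.setText(result);                     // only the cheap hand-over stays on the UI thread
            }
        }.execute(sbText.toString());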

    Read the article

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, and for each one I iterate over a smaller set of strings. Due to the size, this method takes a while, so to speed it up I'm trying to delete a string from the smaller set once it no longer needs to be used. Below is my current code: Ms::Fasta.foreach(@database) do |entry| all.each do |set| if entry.header[1..40].include? set[1] + "|" startVal = entry.sequence.scan_i(set[0])[0] if startVal != nil @locations << [set[0], set[1], startVal, startVal + set[1].length] all.delete(set) end end end end The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know how to fix this is to do this: Ms::Fasta.foreach(@database) do |entry| i = 0 while i < all.length set = all[i] if entry.header[1..40].include? set[1] + "|" startVal = entry.sequence.scan_i(set[0])[0] if startVal != nil @locations << [set[0], set[1], startVal, startVal + set[1].length] all.delete_at(i) i -= 1 end end i += 1 end end This feels kind of sloppy to me. Is there a better way to do this? Thanks.
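
    For comparison, a sketch of the same loop using Array#reject!, which removes the elements for which the block returns true and iterates safely while doing so, so no manual index bookkeeping is needed:

        Ms::Fasta.foreach(@database) do |entry|
          all.reject! do |set|
            next false unless entry.header[1..40].include?(set[1] + "|")
            start_val = entry.sequence.scan_i(set[0])[0]
            next false if start_val.nil?
            @locations << [set[0], set[1], start_val, start_val + set[1].length]
            true  # returning true drops this set from 'all' for later entries
          end
        end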

    Read the article

  • How can I accelerate the generation of an MD5 checksum within vb.net?

    - by Richard
    I'm working with some very large files residing on P2 (Panasonic) cards. Part of the process we employ is to first generate a checksum of the file we are going to copy, then copy the file, then run a checksum on the file to confirm that it copied OK. The problem is that the files are large (70 GB+) and take a long time to complete. It's an issue since we will eventually be dealing with thousands of these files. I would like to find a faster way to generate the checksum other than using the System.Security.Cryptography.MD5CryptoServiceProvider. I don't care if this means using a specialized hardware card, provided it works and is not too ungodly expensive. I would prefer to have a method of encoding that provides some feedback as to how far the process has gone along so I can display it like I do now. The application is written in vb.net. I would prefer to be able to use it as a component, library, or reference within my application, but I'm willing to call an outside application if there is enough improvement in the speed of generating the checksum. Needless to say, the checksum must be consistent and correct. :-) Thank you in advance for your time and efforts, Richard
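
    On the progress side (the raw speed is usually bounded by how fast the P2 card can be read rather than by MD5 itself), the hash can be fed in large chunks through TransformBlock so the UI can be updated between reads. A sketch in VB.NET; ReportProgress stands in for whatever callback the form already uses, and path for the file being checked:

        Dim md5 As New Security.Cryptography.MD5CryptoServiceProvider()
        Dim buffer(8 * 1024 * 1024 - 1) As Byte   ' 8 MB read buffer
        Using fs As New IO.FileStream(path, IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read)
            Dim total As Long = fs.Length
            Dim done As Long = 0
            Dim read As Integer = fs.Read(buffer, 0, buffer.Length)
            While read > 0
                md5.TransformBlock(buffer, 0, read, buffer, 0)
                done += read
                ReportProgress(CInt(done * 100 \ total))   ' hypothetical progress callback
                read = fs.Read(buffer, 0, buffer.Length)
            End While
            md5.TransformFinalBlock(buffer, 0, 0)
        End Using
        Dim checksum As Byte() = md5.Hash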

    Read the article

  • Setup Qt and PyQt on Mac OS X so my app is also deployable on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to work on building a gui for data visualization purposes. I work on Mac Snow Leopard (intel), python 3.1, using gcc 4.2.1 (from Xcode 3.1). I wanted to first install Qt and then PyQt. And my goals are to be able to: - quickly prototype a GUI and the accompanying logic that drives the GUI using PyQt and python - if I decide I need the speed, or if it's fairly easy to translate my GUI into C++ using the Qt tools, have the option to translate my app into C++ - be able to deploy my application onto Windows (both the python and c++ version of my app) Given the goals above, what are the correct steps I should take and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my readings so far, here's what I have: download the Qt source for mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building as a framework causes some trouble in deployment and/or debugging, can't find the article anymore) download latest SIP source and build download latest PyQt and build from source (any special options I should pay attention to?) For deployment, I've read that I would need to use py2exe/cx_freeze for windows, py2app for mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars but it seems like what the article describes is deploying an app you build on windows on the windows platform and vice versa. How do you deploy to windows (is it even possible?) if you are writing your Qt app on a mac? Really appreciate the help

    Read the article

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10mb+) to a winform application. The old solution was implemented using a custom soap extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom soap extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old App on the new server and use the new dynamic content compression features. We're testing things on a test server (win serv 2008) and it seems that it's working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed stream. Here's the question. Where should I put the settings? Most people say I can't put it in my web.config; others say it can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes? Should I set some parameters, somewhere? ... considering my stream is XML (dataset)? Thanks to everyone who would like to help Alberto
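
    On where the settings live: the on/off switch is allowed in the site's web.config, while the list of compressible MIME types is locked to the server-level applicationHost.config by default, which is probably why the advice you have found disagrees. A sketch of both pieces; which dynamicTypes entries you need depends on the Content-Type the service actually returns (text/xml for a classic asmx dataset):

        <!-- web.config (per site / application) -->
        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>

        <!-- applicationHost.config (server level): MIME types eligible for dynamic compression -->
        <system.webServer>
          <httpCompression>
            <dynamicTypes>
              <add mimeType="text/xml" enabled="true" />
              <add mimeType="application/soap+xml" enabled="true" />
            </dynamicTypes>
          </httpCompression>
        </system.webServer>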

    Read the article

  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling 2 C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times are very different from gcc (4.4) to Visual C++ (2008). Visual C++ compilations for one project take in the region of 20 minutes. They cannot take advantage of the multiple cores because one project depends on the other. In the end, a full compilation of both projects in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take in the region of 4 minutes. The compilation can be parallelized across the 4 cores and takes around 1 min 10 secs. All 8 builds for the 4 versions (Debug/Release, 32/64 bits) of the 2 projects are compiled in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time that can be expected to compile a C++ kloc? Mine are 7 s/kloc with vc++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?
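
    One knob that often goes unused: Visual C++ can compile the translation units of a single project in parallel with the /MP switch, which is independent of the project-level parallelism that the dependency between the two projects rules out. Roughly (file names are illustrative):

        rem Per-project parallel compilation: one cl.exe invocation fans out across the cores.
        rem In the IDE: Project Properties -> C/C++ -> Command Line -> Additional Options -> /MP
        cl /MP /c file1.cpp file2.cpp file3.cpp file4.cpp

    Note that /MP is incompatible with /Gm (minimal rebuild), which default Debug configurations typically have switched on, so that option may need to be turned off first.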

    Read the article

  • gl_FragColor and glReadPixels

    - by chun0216
    I am still trying to read pixels from the fragment shader and I have some questions. I know that gl_FragColor returns a vec4, meaning RGBA, 4 channels. After that, I am using glReadPixels to read the FBO and write it into data: GLubyte *pixels = new GLubyte[640*480*4]; glReadPixels(0, 0, 640,480, GL_RGBA, GL_UNSIGNED_BYTE, pixels); This works fine but it really has a speed issue. Instead of this, I want to just read RGB and ignore the alpha channel. I tried: GLubyte *pixels = new GLubyte[640*480*3]; glReadPixels(0, 0, 640,480, GL_RGB, GL_UNSIGNED_BYTE, pixels); instead and this didn't work though. I guess it's because gl_FragColor returns 4 channels and maybe I should do something before this? Actually, since my returned image (gl_FragColor) is grayscale, I did something like float gray = 0.5; //or some other values gl_FragColor = vec4(gray,gray,gray,1.0); So is there any efficient way to use glReadPixels instead of using the first (4-channel) method? Any suggestion? By the way, this is opengl es 2.0 code.
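
    For context on why the GL_RGB read fails: OpenGL ES 2.0 only guarantees GL_RGBA + GL_UNSIGNED_BYTE for glReadPixels; exactly one additional format/type pair is implementation-defined and can be queried at runtime. A sketch that checks for it before falling back to RGBA (pixels and rgbaPixels are buffers sized as in the code above):

        GLint fmt = 0, type = 0;
        glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
        glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);

        if (fmt == GL_RGB && type == GL_UNSIGNED_BYTE) {
            glPixelStorei(GL_PACK_ALIGNMENT, 1);            /* avoid row padding for 3-byte pixels */
            glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        } else {
            /* only GL_RGBA/GL_UNSIGNED_BYTE is guaranteed; read it and drop alpha on the CPU */
            glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
        }

    Either way glReadPixels blocks until the GPU has finished the frame, so much of the cost is the readback stall itself rather than the extra channel.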

    Read the article

  • Concept: Is mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is valuable for one of our upcoming products to be developed on mongo. Without going too much into detail, I'll try to explain what the app does. The app simply has "entities". These entities are technical stuff, like cell phones, TVs, laptops, tablet pcs, and so forth. Of course, a cell phone has different attributes than a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want to have something that we wanna call a schema: We define that we need to save the display size, amount of ram, size of flash devices, processor type, processor speed and so on for tablet pcs. For cell phones we might save display size, GSM, Edge, 3g, 4g, processor, ram, touch screen technology, bla bla bla. I think you got it :) What I want to realize is that each "category" has a schema and when one of the system's users enters a new product (let's say the new iphone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with mongo. But now comes the tough part, for which I could not find a clean solution: An attribute modeled in mongo looks like: { _id: 1234456, name: "Attribute name", type: 0, "description" } But what do I do if I need this attribute in several languages, like: { en: {name: "Attribute name", type: 0, "description"}, de: {name: "Name des Attributs", type: 0, "Beschreibung"} } I also need to ensure that the German attribute gets updated as soon as the English one gets updated, for instance when type changes from 0 to 1. Any ideas on that?
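
    One document shape worth considering: keep the language-independent fields (like type) once per attribute and nest only the translatable strings per locale, so an update to type cannot drift between languages. A sketch of such an attribute document (the i18n field name is made up):

        {
          _id: 1234456,
          type: 0,                          // shared: one update covers every language at once
          i18n: {
            en: { name: "Attribute name",     description: "..." },
            de: { name: "Name des Attributs", description: "..." }
          }
        }

    Queries then read i18n.en or i18n.de depending on the requested locale, and only the fields that genuinely differ per language are duplicated.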

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like #define FLAG1 0x0001 #define FLAG2 0x0002 #define FLAG3 0x0004 class MyClass { ' ' unsigned Flags; int IsFlag1Set() { return Flags & FLAG1; } void SetFlag1Set() { Flags |= FLAG1; } void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; } ' ' }; For obvious reasons I'd like to change this to use bit fields, something like class MyClass { ' ' struct Flags { unsigned Flag1:1; unsigned Flag2:1; unsigned Flag3:1; }; ' ' }; The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32bit and 64bit windows. The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article
