Search Results

Search found 21089 results on 844 pages for 'virtual memory'.

Page 765/844

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki] I am sorry for such a provocative question, but here are my arguments. Most of the progress in "computing" has come from non-programming sources: people invented faster microprocessors, better routers, and novel memory devices. I don't think people are, on average, writing more efficient programs than those written 10 years ago, and the newer, popular languages are in fact slower than C (though speed is one of the lesser criteria). Most of the progress came from novel paradigms: the web, the Internet, cloud computing and social networking are novel paradigms and did not involve progress in programming as such. Even Facebook was written in PHP and not some extreme language; it did face scalability issues (as did Twitter), but I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability. Even things like MapReduce, column-oriented databases and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research rather than some programming convention. So my final point is: why is programming skill so overstressed? To cite a recent example, reportedly only 10% of programmers can write a binary search without debugging. Isn't that a bit hypocritical, considering that real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

    Read the article

  • Zend debugger in Eclipse - timeout every time

    - by jax
    I am running a virtual server in the US. I am trying to get my Eclipse machine at home (outside the USA) to connect to the US server. I have set up Zend on the server. When I run phpinfo() I get the following Zend output (note: 1.2.3.4 stands for the external WAN IP address of my ADSL router at home):

        Directive                        Local Value          Master Value
        zend_debugger.allow_hosts        127.0.0.1,1.2.3.4    127.0.0.1,1.2.3.4
        zend_debugger.allow_tunnel       no value             no value
        zend_debugger.deny_hosts         no value             no value
        zend_debugger.expose_remotely    always               always
        zend_debugger.httpd_uid          -1                   -1
        zend_debugger.max_msg_size       2097152              2097152
        zend_debugger.tunnel_max_port    65535                65535
        zend_debugger.tunnel_min_port    1024                 1024

    So Zend looks like it is working OK on the server side. However, when I run a debug session and select 'Test Debugger', I get a timeout every time. I have already added dummy.php to the root folder of the server. In 'Installed Debuggers' I double-clicked on Zend and entered my external WAN IP address. I noticed that the port is 10000; I also have Webmin running on that port on the server - will there be a conflict?

    Read the article

  • Android SDK: Hello World won't run!

    - by RedRoses
    Hello, this is a very basic question regarding Android development. I am trying to create and run the Hello World example from the Android SDK website, but I can't see anything appearing on the screen. It appears to me that Eclipse just hangs at this point:

        [2010-11-05 09:55:47 - HelloAndroid] ------------------------------
        [2010-11-05 09:55:47 - HelloAndroid] Android Launch!
        [2010-11-05 09:55:47 - HelloAndroid] adb is running normally.
        [2010-11-05 09:55:47 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch
        [2010-11-05 09:55:48 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'AVD2'
        [2010-11-05 09:55:48 - HelloAndroid] Launching a new emulator with Virtual Device 'AVD2'
        [2010-11-05 09:55:51 - HelloAndroid] New emulator found: emulator-5554
        [2010-11-05 09:55:51 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-11-05 09:57:10 - HelloAndroid] WARNING: Application does not specify an API level requirement!
        [2010-11-05 09:57:10 - HelloAndroid] Device API version is 8 (Android 2.2)
        [2010-11-05 09:57:10 - HelloAndroid] HOME is up on device 'emulator-5554'
        [2010-11-05 09:57:10 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554'
        [2010-11-05 09:57:12 - HelloAndroid] Installing HelloAndroid.apk...
        [2010-11-05 09:59:33 - HelloAndroid] Success!
        [2010-11-05 09:59:33 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device emulator-5554

    So it says "Starting activity ..." but nothing ever starts, and it has already been more than 30 minutes. What could be wrong? Thanks!
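
    One detail in the log worth noting is the "Application does not specify an API level requirement!" warning. That warning by itself is usually not fatal, but it can be silenced by declaring a minimum SDK in AndroidManifest.xml; a minimal sketch (the package name is taken from the log above, the rest is standard manifest boilerplate and not necessarily the cause of the hang):

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.helloandroid">
            <!-- API level 8 corresponds to Android 2.2, the emulator image in the log -->
            <uses-sdk android:minSdkVersion="8" />
            <!-- <application> ... </application> as generated by the SDK wizard -->
        </manifest>

    If the emulator still hangs at "Starting activity", watching adb logcat for a crash or restarting the AVD cold is often more informative than the Eclipse console.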

    Read the article

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. Right now I'm using a texture atlas and 16x16 tiles at a 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling. So far I've tried:

    - Drawing only the on-screen tiles with GL_TRIANGLES, two per square tile (too much overhead).
    - Drawing the entire map as a display list (great on a small level, way too slow on a large one).
    - Partitioning the map into display lists half the size of the screen, then culling the display lists (still slows down for 2-directional scrolling; the overdraw is not efficient).

    Any advice is appreciated, but in particular I'm wondering:

    - Vertex arrays/VBOs are often suggested for this because they're dynamic. What's the best way to take advantage of that? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw this doesn't seem like a big win; it's like the half-screen display list idea. (See the sketch after this list.)
    - Does glScissor give any gain when used on a bunch of small tiles like this, whether in a display list or a vertex array/VBO?
    - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

    Thanks :)
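
    On the first question, a rough sketch of the vertex-array approach: rebuild a small client-side array of only the visible tiles whenever the camera crosses a tile boundary, then draw it every frame. This uses fixed-function GL; names like tile_at() and atlas_uv() are placeholders, not from the original post:

        /* Sketch: client-side vertex/texcoord arrays for the visible 16x16 tiles.
         * Rebuild only when the camera enters a new tile column/row. */
        #include <string.h>
        #include <GL/gl.h>

        #define TILE   16
        #define VIS_W  (480 / TILE + 2)   /* one extra column on each side */
        #define VIS_H  (320 / TILE + 2)

        static GLfloat verts[VIS_W * VIS_H * 12];  /* 6 vertices * 2 floats per tile */
        static GLfloat uvs  [VIS_W * VIS_H * 12];

        void rebuild_visible(int first_col, int first_row)
        {
            int n = 0;
            for (int r = 0; r < VIS_H; ++r) {
                for (int c = 0; c < VIS_W; ++c) {
                    float x = (float)((first_col + c) * TILE);
                    float y = (float)((first_row + r) * TILE);
                    GLfloat quad[12] = { x, y,  x + TILE, y,  x + TILE, y + TILE,
                                         x, y,  x + TILE, y + TILE,  x, y + TILE };
                    memcpy(&verts[n], quad, sizeof quad);
                    /* hypothetical helpers: look up the tile id, fill 12 UV floats */
                    atlas_uv(tile_at(first_col + c, first_row + r), &uvs[n]);
                    n += 12;
                }
            }
        }

        void draw_visible(void)
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, uvs);
            glDrawArrays(GL_TRIANGLES, 0, VIS_W * VIS_H * 6);
        }

    Since the vertices are in world coordinates, glTranslate keeps working as before, and the copy only happens when first_col/first_row change, not every frame.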

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store information at each node, plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

        struct node {
        public:
            node_list_iterator parent;             // iterator to a single parent node
            double node_data_array[X];
            map<int, node_list_iterator> children; // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;        // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();           // build the tree
            void save();            // save the tree to disk
            void load();            // load the tree from disk
            void use();             // use the tree
        };

    I would like to implement load() and save() to disk, and they should be fairly fast. The obvious problems are:

    - I don't know the size in advance;
    - the data contains iterators, which are volatile;
    - my ignorance of C++ is prodigious.

    Could anyone suggest a pure C++ solution, please?
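
    One way to sidestep the iterator problem is to serialise by position: number the nodes 0..N-1 in list order and write parent/child links as those indices. A hedged sketch of the save side, assuming node_list_iterator is list<node>::iterator, the node layout above, and that every parent iterator is valid (a sentinel index would be needed for the root). load() would mirror it: create N empty nodes first, keep a vector of their iterators, then patch the links:

        // Sketch only: index-based serialisation. Error handling, endianness
        // and the root's parent are left out for brevity.
        #include <cstddef>
        #include <fstream>
        #include <list>
        #include <map>

        void strategy::save()
        {
            std::map<const node*, std::size_t> index;   // node address -> position in list
            std::size_t i = 0;
            for (std::list<node>::const_iterator it = tree.begin(); it != tree.end(); ++it, ++i)
                index[&*it] = i;

            std::ofstream out("tree.bin", std::ios::binary);
            std::size_t count = tree.size();
            out.write(reinterpret_cast<const char*>(&count), sizeof count);

            for (std::list<node>::const_iterator it = tree.begin(); it != tree.end(); ++it) {
                std::size_t parent = index[&*it->parent];          // iterator -> index
                out.write(reinterpret_cast<const char*>(&parent), sizeof parent);
                out.write(reinterpret_cast<const char*>(it->node_data_array),
                          sizeof it->node_data_array);             // X as in the question
                std::size_t nkids = it->children.size();
                out.write(reinterpret_cast<const char*>(&nkids), sizeof nkids);
                for (std::map<int, node_list_iterator>::const_iterator c = it->children.begin();
                     c != it->children.end(); ++c) {
                    std::size_t child = index[&*c->second];
                    out.write(reinterpret_cast<const char*>(&c->first), sizeof c->first);
                    out.write(reinterpret_cast<const char*>(&child), sizeof child);
                }
            }
        }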

    Read the article

  • What is the most efficient way to handle points / small vectors in JavaScript?

    - by Chris
    Currently I'm creating a web-based (= JavaScript) application that uses a lot of "points" (= small, fixed-size vectors). There are basically two obvious ways of representing them:

        var pointA = [ xValue, yValue ];

    and

        var pointB = { x: xValue, y: yValue };

    So translating my point a bit would look like:

        var pointAtrans = [ pointA[0] + 3, pointA[1] + 4 ];
        var pointBtrans = { x: pointB.x + 3, y: pointB.y + 4 };

    Both are easy to handle from a programmer's point of view (the object variant is a bit more readable), especially as I'm mostly dealing with 2D data, seldom with 3D and hardly ever with 4D - but never more; it will always fit into x, y, z and w. But my question is: what is the most efficient way from the language perspective - theoretically and in real implementations? What are the memory requirements? What are the setup costs of an array vs. an object? My target browsers are Firefox and the WebKit-based ones (Chromium, Safari), but it wouldn't hurt to have a great (= fast) experience under IE and Opera as well...
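
    The honest answer tends to be "measure it in the engines you care about". A rough sketch of such a micro-benchmark (results vary wildly between Firefox's and WebKit's JITs, and micro-benchmarks are only a first approximation):

        // Crude timing sketch: create and translate n points each way.
        // The running sum keeps the JIT from discarding the work entirely.
        function benchArrays(n) {
            var sum = 0, start = Date.now();
            for (var i = 0; i < n; i++) {
                var p = [i, i + 1];
                var q = [p[0] + 3, p[1] + 4];
                sum += q[0] + q[1];
            }
            return { ms: Date.now() - start, check: sum };
        }

        function benchObjects(n) {
            var sum = 0, start = Date.now();
            for (var i = 0; i < n; i++) {
                var p = { x: i, y: i + 1 };
                var q = { x: p.x + 3, y: p.y + 4 };
                sum += q.x + q.y;
            }
            return { ms: Date.now() - start, check: sum };
        }

        var N = 1000000;
        console.log("arrays:  " + benchArrays(N).ms + " ms");
        console.log("objects: " + benchObjects(N).ms + " ms");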

    Read the article

  • Extra NotifyIcon shown in system tray

    - by Kettch19
    I'm having an issue with an app where my NotifyIcon displays an extra icon. The steps to reproduce it are easy, but the problem is that the extra icon shows up after any of the actual code-behind we've added fires. Put simply, clicking a button triggers execution of a method FooBar() which runs all the way through fine, but its primary duty is to fire a BackgroundWorker to log into another of our apps. The extra icon only appears if this particular button is clicked. Strangely enough, we have a WndProc method override, and if I step through until the extra NotifyIcon appears, it always appears during this method, so something beyond the code-behind must be triggering the behaviour. Our WndProc method is currently (although I don't think it's caused by the WndProc):

        Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
            'Check for WM_COPYDATA message from other app or drag/drop action and handle message
            If m.Msg = NativeMethods.WM_COPYDATA Then
                'get the standard message structure from lparam
                Dim CD As NativeMethods.COPYDATASTRUCT = m.GetLParam(GetType(NativeMethods.COPYDATASTRUCT))
                'setup byte array
                Dim B(CD.cbData) As Byte
                'copy data from memory into array
                Runtime.InteropServices.Marshal.Copy(New IntPtr(CD.lpData), B, 0, CD.cbData)
                'Get message as string and process
                ProcessWMCopyData(System.Text.Encoding.Default.GetString(B))
                'empty array
                Erase B
                'set message result to 'true', meaning message handled
                m.Result = New IntPtr(1)
            End If
            'pass on result and all messages not handled by this app
            MyBase.WndProc(m)
        End Sub

    The only place in the code where the NotifyIcon in question is manipulated at all is in the following event handler (again, I don't think this is the culprit, but just for more info):

        Private Sub TrayIcon_MouseDoubleClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles TrayIcon.MouseDoubleClick
            If Me.Visible Then
                Me.Hide()
            Else
                PositionBottomRight()
                Me.Show()
            End If
        End Sub

    The BackgroundWorker's DoWork is as follows (just a class call to log in to our other app, but again just for info):

        Private Sub LoginBackgroundWorker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles LoginBackgroundWorker.DoWork
            Settings.IsLoggedIn = _wdService.LogOn(Settings.UserName, Settings.Password)
        End Sub

    Does anyone else have ideas on what might be causing this, or how to further debug it? I've been banging my head on this without seeing a pattern, so another set of eyes would be extremely appreciated. :) I've posted this on the MSDN WinForms forums as well and have had no luck there so far either.
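
    One way to narrow this down is to log a stack trace whenever the icon's state is touched. A hedged diagnostic sketch only - it assumes the duplicate comes from the icon being created or shown a second time somewhere, and the helper name and suggested call sites are illustrative, not from the original code:

        'Temporary diagnostic: call this from the form constructor, from FooBar(),
        'and from WndProc just before MyBase.WndProc(m), then compare the stack
        'trace of the call that immediately precedes the extra icon appearing.
        Private Sub LogTrayIconState(ByVal context As String)
            System.Diagnostics.Debug.WriteLine(String.Format( _
                "{0}: TrayIcon.Visible={1}", context, TrayIcon.Visible))
            System.Diagnostics.Debug.WriteLine(Environment.StackTrace)
        End Sub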

    Read the article

  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    WAMP users who have both localhost and phpMyAdmin loading blank pages usually have a port problem. In my case only phpMyAdmin is blank; sqlbuddy and phpinfo() work fine. I tried uninstalling and reinstalling WAMP, and I tried XAMPP - same problem: everything works well except phpMyAdmin.

    MySQL log:

        120905  8:03:08 [Note] Plugin 'FEDERATED' is disabled.
        120905  8:03:08 InnoDB: The InnoDB memory heap is disabled
        120905  8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120905  8:03:08 InnoDB: Compressed tables use zlib 1.2.3
        120905  8:03:09 InnoDB: Initializing buffer pool, size = 128.0M
        120905  8:03:09 InnoDB: Completed initialization of buffer pool
        120905  8:03:09 InnoDB: highest supported file format is Barracuda.
        120905  8:03:09 InnoDB: Waiting for the background threads to start
        120905  8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675
        120905  8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306
        120905  8:03:11 [Note]   - '(null)' resolves to '::';
        120905  8:03:11 [Note]   - '(null)' resolves to '0.0.0.0';
        120905  8:03:11 [Note] Server socket created on IP: '0.0.0.0'.
        120905  8:03:13 [Note] Event Scheduler: Loaded 0 events
        120905  8:03:13 [Note] wampmysqld: ready for connections.

    Apache log:

        [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations
        [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42
        [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin

    Read the article

  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example:

        import csv

        hashes = []
        for row in csv.reader(open('old.csv', 'rb')):
            hashes.append(hash(str(row)))

        for row in csv.reader(open('new.csv', 'rb')):
            if hash(str(row)) not in hashes:
                print 'Not found'

    But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, and thus I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
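
    A sketch of one common approach, assuming the memory limit allows one small digest per row of the old file: keep a set (not a list, so membership tests are O(1) rather than a linear scan) of fixed-size digests of a canonical encoding of each row. A cryptographic digest such as MD5 makes accidental collisions vanishingly unlikely, which plain hash() does not guarantee:

        import csv
        import hashlib

        def row_digest(row):
            # Canonical encoding of the row; assumes fields never contain NUL
            # bytes (pick any separator that cannot occur in your data).
            return hashlib.md5('\0'.join(row)).digest()   # 16 bytes per row

        seen = set()
        with open('old.csv', 'rb') as f:
            for row in csv.reader(f):
                seen.add(row_digest(row))

        with open('new.csv', 'rb') as f:
            for row in csv.reader(f):
                if row_digest(row) not in seen:
                    print 'Not found:', row

    As for the bonus question: the original approach is logically similar, so the failure is more likely in the details (for example, `in` on a list is O(n), making the second loop quadratic, which can look like a hang on hundreds of megabytes) than in the hashing itself.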

    Read the article

  • Drawing line graphics leads Flash to spiral out of control!

    - by drpepper
    Hi, I'm having problems with some AS3 code that simply draws on a Sprite's Graphics object. The drawing happens as part of a larger procedure called on every ENTER_FRAME event of the stage. Flash neither crashes nor returns an error. Instead, it starts running at 100% CPU and grabs all the memory it can, until I kill the process manually or my computer buckles under the pressure when it gets up to around 2-3 GB. This happens at a random time, and without any noticeable slowdown beforehand. WTF? Has anyone seen anything like this?

    PS: I used to do the drawing within a MOUSE_MOVE event handler, which brought this problem on even faster.
    PPS: I'm developing on Linux, but reproduced the same problem on Windows.

    UPDATE: You asked for some code, so here we are. The drawing function looks like this:

        public static function drawDashedLine(i_graphics : Graphics, i_from : Point, i_to : Point,
                                              i_on : Number, i_off : Number) : void
        {
            const vecLength : Number = Point.distance(i_from, i_to);
            i_graphics.moveTo(i_from.x, i_from.y);
            var dist : Number = 0;
            var lineIsOn : Boolean = true;
            while (dist < vecLength)
            {
                dist = Math.min(vecLength, dist + (lineIsOn ? i_on : i_off));
                const p : Point = Point.interpolate(i_from, i_to, 1 - dist / vecLength);
                if (lineIsOn)
                    i_graphics.lineTo(p.x, p.y);
                else
                    i_graphics.moveTo(p.x, p.y);
                lineIsOn = !lineIsOn;
            }
        }

    and is called like this (m_graphicsLayer is a Sprite):

        m_graphicsLayer.graphics.clear();
        if (m_destinationPoint)
        {
            m_graphicsLayer.graphics.lineStyle(2, m_fixedAim ? 0xff0000 : 0x333333, 1);
            drawDashedLine(m_graphicsLayer.graphics, m_initialPos, m_destinationPoint, 10, 10);
        }
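
    A hedged suggestion rather than a diagnosis: the dashed line only needs re-tessellating when its endpoints or colour actually change, so caching the last-drawn state avoids redoing the vector work 24-60 times a second. The member names m_lastFrom, m_lastTo and m_lastFixedAim below are new, illustrative fields; everything else mirrors the snippet above:

        // Only redraw when something actually changed since the last frame.
        private var m_lastFrom : Point = null;
        private var m_lastTo : Point = null;
        private var m_lastFixedAim : Boolean;

        private function redrawIfNeeded() : void
        {
            if (m_destinationPoint && m_lastFrom && m_lastTo
                && m_lastFrom.equals(m_initialPos)
                && m_lastTo.equals(m_destinationPoint)
                && m_lastFixedAim == m_fixedAim)
                return; // nothing changed; keep the previous drawing

            m_graphicsLayer.graphics.clear();
            if (m_destinationPoint)
            {
                m_graphicsLayer.graphics.lineStyle(2, m_fixedAim ? 0xff0000 : 0x333333, 1);
                drawDashedLine(m_graphicsLayer.graphics, m_initialPos, m_destinationPoint, 10, 10);
                m_lastFrom = m_initialPos.clone();
                m_lastTo = m_destinationPoint.clone();
                m_lastFixedAim = m_fixedAim;
            }
        }

    A one-line guard at the top of drawDashedLine, such as if (!(i_on + i_off > 0)) return;, also rules out the pathological case where the while loop could never terminate (both steps zero, or NaN parameters).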

    Read the article

  • No class def found error for JUnit Test on Android

    - by J Bellamy
    I am having some very bizarre behaviour. I have a large number of test cases for my Android application, and they all work except for one. When I run that one I get:

        java.lang.NoClassDefFoundError: org.JUnit.test

    Yes, I have the JUnit 4 library imported into the project, and my other JUnit tests run without any problems. What is particularly bizarre is that before I hit this problem I had an error in my code: basically, I tried writing a file to a read-only folder. When that occurred, the JUnit test would execute up to the point where it hit an IO exception for accessing a part of memory it cannot access. I fixed that problem, and suddenly the Android emulator doesn't seem to know what org.JUnit.test is. I have examined the run configuration for this test class, and it is the same as my others. It is in the same folder as the other tests as well. It also uses the same import statements. Any idea what is going on? I am using the Android 10 emulator and Eclipse version 3.7.2.

    Edit: To clarify, the error I get appears in LogCat and not in my Eclipse project.
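
    A hedged aside on the error itself: the Dalvik VM of that era does not ship JUnit 4 on the device, so a test that actually runs on the emulator and references org.junit classes can fail at runtime even though tests run locally on the JVM are fine. If that is what is happening here, rewriting the failing case in the JUnit-3 style that the Android instrumentation runner expects would look roughly like this (class and method names are illustrative):

        import android.test.AndroidTestCase;

        // JUnit-3 style: no @Test annotations; test methods start with "test".
        public class FileWritingTest extends AndroidTestCase {

            public void testWritesToCacheDir() {
                // getContext() is provided by AndroidTestCase on the device.
                java.io.File dir = getContext().getCacheDir();
                assertTrue("cache dir should be writable", dir.canWrite());
            }
        }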

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text.

    According to the standard, objects of different type may not share the same memory location, so this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me whether this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers

    The answer is NO, for two reasons:

    - You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    - The pointer address may not have the right alignment for the target object. In that case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it results in performance overhead; on ARM it will trigger a CPU trap and your program will be terminated!

    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
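
    A short illustration of the direction the summary points toward: instead of reinterpreting the buffer pointer, copy the bytes into a real object with memcpy, which sidesteps both the aliasing and the alignment problem. A sketch only; read_socket and the 4-byte assumption come from the example above, not a real API:

        #include <cstring>   // std::memcpy

        // Sketch: extract an unsigned from a raw byte buffer without aliasing UB.
        unsigned read_unsigned(const char * buf)
        {
            unsigned u;                       // a genuine unsigned object...
            std::memcpy(&u, buf, sizeof u);   // ...filled from the byte buffer
            return u;                         // (endianness still has to match the sender)
        }

        // Usage with the buffer from the question:
        //   char * c = read_socket(...);
        //   unsigned value = read_unsigned(c);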

    Read the article

  • Find messages from a certain key to a certain key while being able to remove stale keys

    - by Alfred
    My problem: let's say I add messages to some sort of data structure:

        1. "dude"
        2. "where"
        3. "is"
        4. "my"
        5. "car"

    Asking for messages with index [4,5] should return: "my", "car". Next, let's assume that after a while I would like to purge old messages because they aren't useful anymore and I want to save memory. Say that at time x, messages [1-3] became stale; I assume it would be most efficient to do the deletion just once every x seconds. Afterwards my data structure should contain:

        4. "my"
        5. "car"

    My solution? I was thinking of using a ConcurrentSkipListSet or ConcurrentSkipListMap, and of deleting the old messages from inside a newSingleThreadScheduledExecutor. I would like to know how you would implement this (efficiently/thread-safely), or whether there is a library you would use.
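
    A sketch along the lines the question already suggests (Java; the key type and purge interval are illustrative): a ConcurrentSkipListMap keyed by message index gives both the range query and the bulk purge as cheap view operations, and the scheduled executor does the periodic cleanup:

        import java.util.Collection;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class MessageLog {
            private final ConcurrentSkipListMap<Long, String> messages =
                    new ConcurrentSkipListMap<Long, String>();
            private final ScheduledExecutorService purger =
                    Executors.newSingleThreadScheduledExecutor();
            private volatile long staleBelow = 0;  // everything below this key is stale

            public MessageLog() {
                purger.scheduleWithFixedDelay(new Runnable() {
                    public void run() {
                        // clear() on the headMap view removes the stale entries
                        // from the backing map, safely under concurrent access.
                        messages.headMap(staleBelow).clear();
                    }
                }, 10, 10, TimeUnit.SECONDS);
            }

            public void add(long index, String message) { messages.put(index, message); }

            public void markStaleBelow(long index)      { staleBelow = index; }

            public Collection<String> range(long from, long to) {
                return messages.subMap(from, true, to, true).values();  // inclusive [from, to]
            }
        }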

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LINQ to SQL to query a small, simple SQL Server CE database, and I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LINQ to SQL will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this: one of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query - but this throws out a lot of what makes LINQ to SQL so appealing.

    So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update - here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
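
    A sketch of the "pull it into RAM once" idea, assuming reads vastly outnumber writes and that OrderId maps to a nullable int as in the DDL (the DataContext name and table properties are illustrative): materialise each table to a List<T> up front and run the joins with LINQ to Objects, falling back to the DataContext only for writes.

        // Sketch: read-mostly in-memory cache over the CE database.
        using System.Collections.Generic;
        using System.Linq;

        public class CachedDb
        {
            private readonly List<Customer> _customers;
            private readonly List<Order> _orders;

            public CachedDb(MyDataContext db)          // hypothetical DataContext
            {
                _customers = db.Customers.ToList();    // one round-trip per table
                _orders    = db.Orders.ToList();
            }

            // Same shape as the slow query, but evaluated entirely in memory.
            public IEnumerable<Customer> CustomersWhoOrdered(string productName)
            {
                return from c in _customers
                       join o in _orders on c.OrderId equals (int?)o.Id
                       where o.ProductName == productName
                       select c;
            }
        }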

    Read the article

  • Question on passing a pointer to a structure in C to a function?

    - by worlds-apart89
    Below, I wrote a primitive singly linked list in C. The function "addEditNode" MUST receive a pointer by value, which, I am guessing, means we can edit the data the pointer refers to but cannot point it to something else. If I allocate memory using malloc in "addEditNode", can I see the contents of first->next when the function returns? The second question is: do I have to free first->next, or is it only first that I should free? I am running into segmentation faults on Linux.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;

        struct list_node {
            int value;
            list_node_t *next;
        };

        void addEditNode(list_node_t *node)
        {
            node->value = 10;
            node->next = (list_node_t*) malloc(sizeof(list_node_t));
            node->next->value = 1;
            node->next->next = NULL;
        }

        int main()
        {
            list_node_t *first = (list_node_t*) malloc(sizeof(list_node_t));
            first->value = 1;
            first->next = NULL;
            addEditNode(first);
            free(first);
            return 0;
        }
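
    On the second question: every block obtained from malloc has to be freed individually, so freeing only first leaks the node allocated inside addEditNode. A common sketch is a small helper that walks the list (the function name is illustrative):

        #include <stdlib.h>

        /* Frees every node reachable from head, including head itself. */
        void free_list(list_node_t *head)
        {
            while (head != NULL) {
                list_node_t *next = head->next;  /* remember the rest before freeing */
                free(head);
                head = next;
            }
        }

        /* In main(), replace free(first) with: free_list(first); */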

    Read the article

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game, which consists of:

        1) List<Event> calendar;

        2) public class Event
           {
               /// <summary>
               /// When to process the event
               /// </summary>
               public Int64 when;

               /// <summary>
               /// Which object should process the event
               /// </summary>
               public GameObject who;

               /// <summary>
               /// Type of event
               /// </summary>
               public EventType what;

               public int posX;
               public int posY;
               public int EventID;
           }

        3) calendar.Add(new Event(...))

    The problem with this code is that, even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually need to collect. Garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code. My considerations:

    - Change the Event class into a structure - but the structure is not entirely small, and it takes some time to copy it wherever I need it.
    - Reuse Event objects somehow (add a queue of used events, and when a new event is needed, take one from this queue).

    Does anybody have another idea how to solve the problem? Thanks for suggestions!
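
    A sketch of the second consideration (object reuse), under the assumption that events are obtained and released on a single thread; the pool type and method names are illustrative:

        using System.Collections.Generic;

        // Sketch: recycle Event instances instead of allocating one per message.
        public class EventPool
        {
            private readonly Stack<Event> _free = new Stack<Event>();

            public Event Obtain()
            {
                return _free.Count > 0 ? _free.Pop() : new Event();
            }

            // Call once the event has been processed and is no longer referenced.
            public void Release(Event e)
            {
                e.who = null;          // drop the GameObject reference so it can be collected
                _free.Push(e);
            }
        }

        // Usage instead of calendar.Add(new Event(...)):
        //   Event e = pool.Obtain();
        //   e.when = when; e.who = who; e.what = what; e.posX = x; e.posY = y;
        //   calendar.Add(e);
        //   ... later, after processing: calendar.Remove(e); pool.Release(e);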

    Read the article

  • Outlook VSTO AddIn for Meetings

    - by BigDubb
    We have created a VSTO add-in for Outlook meetings. As part of this we trap the Send event of the message in the FormRegionShowing event:

        _apptEvents.Send += new Microsoft.Office.Interop.Outlook.ItemEvents_SendEventHandler(_apptEvents_Send);

    The method _apptEvents_Send then tests a couple of properties and exits where appropriate:

        private void _apptEvents_Send(ref bool Cancel)
        {
            if (!_Qualified)
            {
                MessageBox.Show("Meeting has not been qualified", "Not Qualified Meeting",
                                MessageBoxButtons.OK, MessageBoxIcon.Information);
                chkQualified.Focus();
                Cancel = true;
            }
        }

    The problem we're having is that some users' messages get sent twice: once when the meeting is sent, and a second time when the user re-opens Outlook. I've looked for memory leaks, thinking that something might not be getting disposed of properly, and have added explicit object disposal in all finally blocks to try to make sure resources are managed, but we still get the behaviour inconsistently across the organization - i.e. I never encountered the problem during development, nor did other developers during testing. All users are up to date on the framework (3.5 SP1) and on hotfixes for Outlook. Does anyone have any ideas on what might be causing this? Any ideas would be greatly appreciated.
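
    One thing worth ruling out (a guess, not a diagnosis): if FormRegionShowing can fire more than once for the same item, the Send handler ends up attached multiple times and the handler logic runs repeatedly per send. That would not by itself explain the re-send on restart, but it is cheap to make the subscription idempotent and eliminate it as a variable:

        // Sketch: removing a handler that was never added is harmless, so this
        // is safe to run on every FormRegionShowing.
        private void HookSendEvent()
        {
            _apptEvents.Send -= _apptEvents_Send;
            _apptEvents.Send += _apptEvents_Send;
        }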

    Read the article

  • Implementing a musical instrument using Audio Units

    - by Develop.Kim
    (Posted the same question on the Apple developer forum, too. Sorry in advance - my English is poor.)

    I want to develop an iPhone application that plays a musical instrument like an ocarina, but without the blow-into-the-mic feature. So first I tried to find out how to implement a 'virtual musical instrument' in iPhone development, and after reading this article (link) I decided to implement it using Audio Units. I have two questions:

    - I understand that an instrument's sound can be divided into three phases: 'attack', 'sustain' and 'release' ('decay' may be included as well) (link). How do I implement each phase in an audio unit based on AUInstrumentBase?
    - I downloaded the sample 'SinSynth' (link). I want to play notes on this instrument unit so I can analyse the source and study it. Is there a way to do this using AU Lab? I expect the usual way is MIDI input, but I don't have a MIDI device.

    In addition, I wonder whether I am even thinking about this the right way. Thanks for reading, and for any advice.

    Read the article

  • Need help getting parent reference to child view controller

    - by Andy
    I've got the following code in one of my view controllers:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.section) {
                case 0: // "days" section tapped
                {
                    DayPicker *dayPicker = [[DayPicker alloc] initWithStyle:UITableViewStylePlain];
                    dayPicker.rowLabel = self.activeDaysLabel;
                    dayPicker.selectedDays = self.newRule.activeDays;
                    [self.navigationController pushViewController:dayPicker animated:YES];
                    [dayPicker release];
                    break;
                    ...

    Then, in the DayPicker controller, I do some stuff to the dayPicker.rowLabel property. Now, when the dayPicker is dismissed, I want the value in dayPicker.rowLabel to be used as the cell.textLabel.text property in the cell that called the controller in the first place (i.e., the cell label becomes the option that was selected within the DayPicker controller). I thought that by using the assignment operator to set dayPicker.rowLabel = self.activeDaysLabel, the two pointed to the same object in memory, and that upon dismissing the DayPicker, my first view controller, which uses self.activeDaysLabel as the cell.textLabel.text property for the cell in question, would automagically pick up the new value of the object. But no such luck. Have I missed something basic here, or am I going about this the wrong way?

    I originally passed a reference to the calling view controller to the child view controller, but several here told me that was likely to cause problems, being a circular reference. That setup worked, though; now I'm not sure how to accomplish the same thing "the right way." As usual, thanks in advance for your help.
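
    The usual "right way" for this is a delegate protocol with a non-retaining (assign) back-reference, which avoids the retain cycle the earlier commenters were worried about. A rough sketch with hypothetical names (DayPickerDelegate, dayPickerDidFinish:), not code from the project:

        #import <UIKit/UIKit.h>

        // DayPicker.h
        @class DayPicker;

        @protocol DayPickerDelegate <NSObject>
        - (void)dayPickerDidFinish:(DayPicker *)picker;
        @end

        @interface DayPicker : UITableViewController
        @property (nonatomic, assign) id<DayPickerDelegate> delegate;  // assign = no retain cycle
        @property (nonatomic, copy) NSString *rowLabel;
        @end

        // In DayPicker.m, just before it is dismissed:
        //   [self.delegate dayPickerDidFinish:self];

        // In the parent controller (which adopts DayPickerDelegate):
        //   dayPicker.delegate = self;                 // set when pushing the picker
        - (void)dayPickerDidFinish:(DayPicker *)picker {
            self.activeDaysLabel = picker.rowLabel;     // pull the chosen value back
            [self.tableView reloadData];                // refresh the cell's textLabel
        }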

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")        # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")       # This works as expected.
        6L
        >>> writer = csv.writer(output)   # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]  # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)      # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I try and feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the Exception? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
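
    For context, and as a hedged workaround sketch: in Python 2 the csv module works with byte strings internally, so the writer hands str objects to the underlying file even when the fields passed in are unicode - which is exactly what io.StringIO refuses. The documented pattern is to write bytes and encode/decode at the boundary, e.g. with io.BytesIO:

        import csv
        import io

        output = io.BytesIO()                      # byte stream, which the py2 csv module expects
        writer = csv.writer(output)
        writer.writerow([u"Hello", u"Goodbye"])    # ASCII-only unicode is accepted as-is
        # For non-ASCII data, encode each field first:
        writer.writerow([field.encode("utf-8") for field in [u"H\xe9llo", u"G\xf6odbye"]])

        text = output.getvalue().decode("utf-8")   # back to unicode for the py3-style code
        print repr(text)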

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it to try and download this file I get an out of memory exception. The only way I know to download the file is below. The reason I'm using the code below is because the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
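
    A sketch of the streaming alternative: keep the same headers, but turn buffering off and either hand the whole job to Response.TransmitFile (which never loads the file into managed memory) or copy it in small chunks. The 64 KB chunk size is an arbitrary choice:

        string path = context.Server.MapPath(url);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = false;                       // don't accumulate 323 MB in memory
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);

        // Option 1: let IIS/ASP.NET stream the file.
        context.Response.TransmitFile(path);

        // Option 2: stream it manually in 64 KB chunks.
        // using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        // {
        //     byte[] buffer = new byte[64 * 1024];
        //     int read;
        //     while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 && context.Response.IsClientConnected)
        //         context.Response.OutputStream.Write(buffer, 0, read);
        // }

        context.Response.End();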

    Read the article

  • Is it OK to set state within Event Raising methods?

    - by Greg
    I ran across this pattern in the code of a library I'm using. It sets state within the event-raising method, but only if the event is not null:

        protected virtual void OnMyEvent(EventArgs e)
        {
            if (MyEvent != null)
            {
                State = "Executing"; // Only sets state if MyEvent != null.
                MyEvent(this, e);
            }
        }

    Which means that the state is not set when overriding the method:

        protected override void OnMyEvent(EventArgs e)
        {
            base.OnMyEvent(e);
            Debug.Assert(State == "Executing"); // This fails
        }

    but is only set when handling the event:

        foo.MyEvent += (o, args) => Debug.Assert(State == "Executing"); // This passes

    Setting state within the if (MyEvent != null) seems like bad form, but I've checked the Event Design Guidelines and they don't mention this. Do you think this code is incorrect? If so, why? (A reference to the design guidelines would be helpful.)

    Edit for context: it's a Control, I'm trying to create a subclass of it, and the "state" it's setting is calling EnsureChildControls() conditionally, based on there being an event handler. I can call EnsureChildControls() myself, but I consider that something of a hack.
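
    For comparison, a sketch of how the method is usually written so that the state change and the event raise are decoupled (whether this particular library should do that is of course the question being asked; MyEvent is assumed to be a plain EventHandler here):

        protected virtual void OnMyEvent(EventArgs e)
        {
            State = "Executing";            // state change happens whether or not anyone listens

            EventHandler handler = MyEvent; // copy guards against unsubscription between check and call
            if (handler != null)
            {
                handler(this, e);
            }
        }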

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user568249
    I'm currently working in an area related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this, let me say I have the following matrix:

        [a b; c d]

    I want a data structure that allows each of a, b, c, d to be either a real number or a random variable. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 0 and SD 1. The data structure I have in mind would give no value to d. However, I also want to be able to design a function that takes in the structure, simulates a uniform(0,1) draw, obtains a value for d using an inverse CDF, and then spits out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
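
    One possible sketch in MATLAB, using a cell array where deterministic entries are numbers and random entries are function handles mapping a uniform(0,1) draw to a value via the inverse CDF. Whether this is lean enough for very large matrices depends on how sparse the random entries are - a leaner variant keeps a plain numeric matrix plus a short list of (index, handle) pairs for the random entries only:

        % realize.m - turn a cell matrix of numbers / function handles into numbers.
        function A = realize(C)
            A = cellfun(@pick, C);      % uniform output: every entry becomes a double
        end

        function v = pick(x)
            if isa(x, 'function_handle')
                v = x(rand());          % draw u ~ U(0,1), apply the inverse CDF
            else
                v = x;
            end
        end

    Usage for the example above (d ~ N(0,1) via the Statistics Toolbox icdf):

        M = {1, -1; 2, @(u) icdf('Normal', u, 0, 1)};
        A = realize(M);   % A is a plain 2x2 numeric matrix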

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few 100 ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values).

    I want to support the following operations efficiently:

    - add a new ID to the collection
    - invalidate an existing ID
    - retrieve the sum of f(id) in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to maintain incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that.

    The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
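
    A hedged sketch of the "piggyback on an existing store" idea in Python: keep only the running sum in memory and park each id's last-known f(id) in an on-disk key-value store (shelve here, purely as an example), so that invalidation can subtract the old vector without holding all old vectors in RAM:

        import shelve
        from collections import Counter

        class CachedSum(object):
            """Running sum of f(id) over the currently valid ids.

            Sketch only: f returns a sparse vector as a {dimension: int} dict,
            and the last-known f(id) for every id lives on disk in a shelve
            file so invalidation can subtract it without keeping it in RAM.
            """

            def __init__(self, f, path):
                self.f = f
                self.store = shelve.open(path)   # id -> last-known sparse vector
                self.total = Counter()           # current sum over all valid ids

            def add(self, uid):
                vec = self.f(uid)
                self.store[str(uid)] = vec
                self.total.update(vec)           # fold the new contribution in

            def invalidate(self, uid):
                old = self.store.get(str(uid))
                if old is not None:
                    self.total.subtract(old)     # drop the stale contribution
                self.add(uid)                    # recompute f and fold it back in

            def current_sum(self):
                return dict(self.total)          # already up to date

            def close(self):
                self.store.close()

    The retrieval cost target is met indirectly here: the sum is maintained as changes happen (which already pay for a call to f), so reading it back is cheap; zero-valued dimensions left behind by subtract could be pruned periodically.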

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following three bootloaders:

    - an infinite loop: jmp $
    - printing a single character
    - printing "Hello World"

    This is fantastic, and I've gone through these three variations with very little trouble. I'd now like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe add some input/output from the user, to maybe compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin.

    First, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file. For Linux, it's an ELF file. These files are quite different from each other. In my case the target isn't Windows or Linux - it's just whatever I have written in the bootloader.

    Second, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load that data into memory before I can even think about executing it. From my understanding, all of this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader.

    So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?
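
    A sketch of the path most hobby-OS tutorials take (all flags, names and addresses here are illustrative assumptions, and modern GCC may additionally need -fno-stack-protector or similar): compile the C code freestanding, link it as a flat binary at a known address, append it to the disk image right after the 512-byte boot sector, have the bootloader load those sectors with INT 13h, switch to protected mode, and jump to that address.

        /* kernel.c - minimal 32-bit payload, no OS assumed.
         * Build (illustrative):
         *   gcc -m32 -ffreestanding -fno-pie -c kernel.c -o kernel.o
         *   ld -m elf_i386 -Ttext 0x1000 --oformat binary -e kmain -o kernel.bin kernel.o
         *   cat boot.bin kernel.bin > disk.img
         * The bootloader reads kernel.bin's sectors to 0x1000, enters 32-bit
         * protected mode, and jumps to 0x1000. */

        void kmain(void)
        {
            volatile char *video = (volatile char *)0xB8000; /* VGA text buffer */
            const char *msg = "Primes coming soon";
            for (int i = 0; msg[i] != '\0'; ++i) {
                video[i * 2]     = msg[i];   /* character */
                video[i * 2 + 1] = 0x07;     /* light grey on black */
            }
            for (;;) { }                     /* never return to the bootloader */
        }

    Going 64-bit follows the same outline, but the bootloader (or a second-stage loader) additionally has to set up paging and enter long mode before the jump.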

    Read the article
