Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • Can a mobile phone be used as a device for developing applications?

    - by Richa Media and services
    I claim that we can develop on a mobile phone using the following technique, working with any IDE and any OS without problems, for example running Visual Studio on Windows 7. How is that possible? 1. We use the mobile phone as the CPU and a monitor to view the code; we use Samsung's wireless display technique, which sends the signal to the monitor wirelessly to display the code on it. 2. We use a wireless keyboard and mouse (if the user prefers USB, he can also use a USB keyboard and mouse). 3. We use a component inside the mobile phone to control all connectivity such as internet, Wi-Fi and Bluetooth; through this component the user can easily set up, control and use these features. 4. Don't be confused: I am definitely not saying we should read the code on the mobile's screen or type on the mobile's keyboard, because they are too small to work with, so we use a monitor (LCD) to display the code and a keyboard to work comfortably and freely. 5. What would you think if you saw a developer working this way? 6. It is not impossible. Please give me some feedback and suggestions on what you think of this technology.

    Read the article

  • Design by contracts and constructors

    - by devoured elysium
    I am implementing my own ArrayList for school purposes, but to spice things up a bit I'm trying to use C# 4.0 Code Contracts. All was fine until I needed to add contracts to the constructors. Should I add Contract.Ensures() in the parameterless constructor?

        public ArrayList(int capacity) {
            Contract.Requires(capacity > 0);
            Contract.Ensures(Size == capacity);
            _array = new T[capacity];
        }

        public ArrayList() : this(32) {
            Contract.Ensures(Size == 32);
        }

    I'd say yes, each method should have a well-defined contract. On the other hand, why put it there if it's just delegating work to the "main" constructor? Logic-wise, I wouldn't need to. The only point I see where it'd be useful to explicitly define the contract in both constructors is if in the future we get IntelliSense support for contracts. Were that to happen, it'd be useful to be explicit about which contracts each method has, as they'd appear in IntelliSense. Also, are there any books around that go a bit deeper into the principles and usage of Design by Contract? One thing is knowing the syntax for using contracts in a language (C#, in this case); another is knowing how and when to use them. I read several tutorials and Jon Skeet's C# in Depth article about it, but I'd like to go a bit deeper if possible. Thanks

    Read the article

  • How does PHP interface with Apache?

    - by Sbm007
    Hi, I've almost finished writing an HTTP/1.0-compliant web server in Java (no commercial usage as such, this is just for fun) and basically I want to include PHP support. I realize that this is no easy task at all, but I think it'll be a nice accomplishment. So I want to know how PHP exactly interfaces with the Apache web server (or any other web server, really), so I can learn from it and write my own PHP wrapper. It doesn't necessarily have to be mod_php; I don't mind writing a FastCGI wrapper, which to my knowledge is capable of running PHP as well. I would have thought that all PHP needs is the output that goes to the client (so it can interpret the PHP parts), the full HTTP request from the client (so it can extract POST variables and such) and the client's host name. And then you simply take the PHP interpreter's output and write that to the output stream. There will probably be more things, but in essence that's how I would have thought it works. From what I've gathered so far, apache2handler provides an API which PHP makes use of to 'connect' to Apache. I guess it's an idea to look at the source code for apache2handler and php5apache2.dll or so, but before I do that I thought I'd ask SO first. If anyone has more information, experience, or some sort of specification that is relevant to this, then please let me know. Thanks in advance!
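    Since the server above is written in Java, one route that avoids an Apache-style module entirely is the CGI/FastCGI direction the question mentions: set the standard CGI environment variables and run the PHP binary as a child process, then relay its output to the client. Below is a rough sketch of that idea, assuming a php-cgi executable is on the PATH; the variable names follow the CGI convention, and REDIRECT_STATUS is set because php-cgi refuses to run without it when cgi.force_redirect is enabled (the default). Everything else (method, query string, body handling) would come from your own request object.

        import java.io.*;
        import java.util.Map;

        public class PhpCgiBridge {
            // Runs a PHP script through php-cgi and returns its raw output
            // (CGI headers + body), which the server can relay to the client.
            public static String run(String scriptPath, String queryString, String body)
                    throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("php-cgi");
                Map<String, String> env = pb.environment();
                env.put("REDIRECT_STATUS", "1");          // php-cgi insists on this with cgi.force_redirect on
                env.put("GATEWAY_INTERFACE", "CGI/1.1");
                env.put("SCRIPT_FILENAME", scriptPath);   // which .php file to execute
                env.put("REQUEST_METHOD", body == null ? "GET" : "POST");
                env.put("QUERY_STRING", queryString == null ? "" : queryString);
                if (body != null) {
                    env.put("CONTENT_TYPE", "application/x-www-form-urlencoded");
                    env.put("CONTENT_LENGTH", String.valueOf(body.length()));
                }
                Process p = pb.start();
                if (body != null) {
                    p.getOutputStream().write(body.getBytes());   // POST data goes to php-cgi's stdin
                }
                p.getOutputStream().close();
                StringBuilder out = new StringBuilder();
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) out.append(line).append("\r\n");
                }
                p.waitFor();
                return out.toString();   // starts with CGI headers, then a blank line, then the body
            }
        }

    FastCGI is conceptually the same handoff, except the PHP process stays alive and requests are multiplexed over a socket instead of spawning one process per request.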

    Read the article

  • Android - Memory leak when dynamically building UI with image resource backgrounds

    - by Rich
    I have an Activity that I swear is leaking memory. The app I'm working on does a lot with images, so I've had to be pretty stingy with memory when working directly with Bitmaps. I added an Activity, and now if you use this new Activity it basically puts me over the edge with mem usage and I end up throwing the "Bitmap exceeds VM budget" exception. If you never launch this Activity, everything is smooth as it was previously. I started reading about memory leaks, and I think that I have a similar situation to what is described in the article in the Android docs. I'm dynamically creating a bunch of image views and adding a BackgroundDrawable from the resources and adding an OnClickListener as well. I imagine I have to do some cleanup when the Activity hits onPause in its life cycle, but I'd like to know specifically what is the correct way. Here is the code that should demonstrate the objects I'm working with:

        LinearLayout templateContainer;
        ...
        ImageView imgTemplatePreview = (ImageView) item.findViewById(R.id.imgTemplatePreview);
        ...
        imgTemplatePreview.setBackgroundDrawable(getResources().getDrawable(previewId));
        imgTemplatePreview.setOnClickListener(imgClick);
        templateContainer.addView(item);
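    A common cleanup pattern for this kind of dynamically built view tree is to walk the container when the Activity is torn down and drop the references that keep the drawables, listeners and Activity context alive. This is a sketch only, not taken from the project in question; the field name mirrors the snippet above, the class name and the rest are illustrative:

        import android.app.Activity;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.LinearLayout;

        public class TemplatePickerActivity extends Activity {
            private LinearLayout templateContainer;   // same field as in the question

            @Override
            protected void onDestroy() {
                super.onDestroy();
                if (templateContainer != null) {
                    unbindDrawables(templateContainer);
                }
            }

            // Recursively drop the references that keep the drawables (and this Activity) alive.
            private void unbindDrawables(View view) {
                if (view.getBackground() != null) {
                    view.getBackground().setCallback(null);   // drawable no longer points back at the view
                }
                view.setOnClickListener(null);                // drop the listener reference
                if (view instanceof ViewGroup) {
                    ViewGroup group = (ViewGroup) view;
                    for (int i = 0; i < group.getChildCount(); i++) {
                        unbindDrawables(group.getChildAt(i));
                    }
                    group.removeAllViews();
                }
            }
        }

    Whether onPause or onDestroy is the right hook depends on whether the Activity can return to the foreground with its views intact; clearing the drawable callbacks matters because resource drawables are cached and the callback otherwise keeps pointing at a dead view hierarchy.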

    Read the article

  • Memory Leakage using datatables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records into each of two DataTables. I need to do some manipulation and insert the records into SQL Server in the Manipulate(dt1, dt2) function. I have to do this 15 times, as you can see in the for loop. Now I want to know which would be the more effective approach in terms of memory usage. I've used the first approach; please suggest the best one.

    (1)

        for (int i = 0; i < 15; i++) {
            DataTable dt1 = GetInfo(i);
            DataTable dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

    or

    (2)

        DataTable dt1 = new DataTable();
        DataTable dt2 = new DataTable();
        for (int i = 0; i < 15; i++) {
            dt1 = null;
            dt2 = null;
            dt1 = GetInfo();
            dt2 = GetData();
            Manipulate(dt1, dt2);
        }

    Thanks, Vix.

    Read the article

  • DirectX: Game loop order, draw first and then handle input?

    - by Ricket
    I was just reading through the DirectX documentation and encountered something interesting on the page for IDirect3DDevice9::BeginScene: "To enable maximal parallelism between the CPU and the graphics accelerator, it is advantageous to call IDirect3DDevice9::EndScene as far ahead of calling present as possible." I've been accustomed to writing my game loop to handle input and such, then draw. Do I have it backwards? Maybe the game loop should be more like this (semi-pseudocode, obviously):

        while (running) {
            d3ddev->Clear(...);
            d3ddev->BeginScene();
            // draw things
            d3ddev->EndScene();
            // handle input
            // do any other processing
            // play sounds, etc.
            d3ddev->Present(NULL, NULL, NULL, NULL);
        }

    According to that sentence of the documentation, this loop would "enable maximal parallelism". Is this commonly done? Are there any downsides to ordering the game loop like this? I see no real problem with it after the first iteration... And I know the best way to know the actual speed increase of something like this is to actually benchmark it, but has anyone else already tried this, and can you attest to any actual speed increase?

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures. There is a heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might consider using Fluent Hibernate or Subsonic, as i've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the usage of the temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the 2 choices that I want to make, which one of the 2 is the best route to go and why? If my choices are extremely idiotic, please provide alternatives. Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)

    Read the article

  • What are the original reasons for ToString() in Java and .NET?

    - by d.
    I've used ToString() modestly in the past and I've found it very useful in many circumstances. However, my usage of this method would hardly justify putting it on nothing less than System.Object. My wild guess is that, at some point during the work carried out and the meetings held to come up with the initial design of the .NET framework, it was decided that it was necessary - or at least extremely useful - to include a ToString() method that would be implemented by everything in the .NET framework. Does anyone know what the exact reasons were? Am I missing a ton of situations where ToString() proves useful enough to be part of System.Object? What were the original reasons for ToString()? Thanks a lot! PS - Again: I'm not questioning the method or implying that it's not useful, I'm just curious to know what makes it SO useful as to be placed in System.Object. Side note - Imagine this:

        AnyDotNetNativeClass someInitialObject = new AnyDotNetNativeClass([some constructor parameters]);
        AnyDotNetNativeClass initialObjectFullCopy = AnyDotNetNativeClass.FromString(someInitialObject.ToString());

    Wouldn't this be cool? EDIT(1): (A) - Based on some answers, it seems that .NET languages inherited this from Java. So, I'm adding "Java" to the subject and to the tags as well. If someone knows the reasons why this was implemented in Java then please shed some light! (B) - Static hypothetical FromString vs serialization: sure, but that's quite a different story, right?
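    On the Java side, offered only as an illustration of why a universal toString() pulls its weight rather than as the historical rationale the question asks for, the default java.lang.Object implementation is trivial and its value shows up wherever objects are printed, logged or concatenated into strings (the Point class below is made up for illustration):

        public class ToStringDemo {
            // A typical override so debug output and string concatenation are readable.
            static class Point {
                final int x, y;
                Point(int x, int y) { this.x = x; this.y = y; }
                @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
            }

            public static void main(String[] args) {
                // Default Object.toString(): class name + "@" + hex identity hash code.
                System.out.println(new Object());                      // e.g. java.lang.Object@1b6d3586
                // Anywhere a string is needed, toString() is called implicitly:
                System.out.println("clicked at " + new Point(3, 4));   // clicked at Point(3, 4)
            }
        }

    Because every object has some string form, generic code such as debuggers, logging frameworks, collection printing and string concatenation can work without knowing the concrete type.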

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
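    One direction that does parallelize, with the caveat that the result is a hash-of-hashes rather than a plain MD5/SHA-256 of the whole file (so both ends must agree on the scheme), is tree-style hashing: split the file into fixed-size chunks, hash the chunks on separate cores, then hash the concatenation of the chunk digests. A rough sketch of that idea in Java follows; the 8 MiB chunk size and the choice of SHA-256 are arbitrary assumptions, not recommendations:

        import java.io.File;
        import java.io.RandomAccessFile;
        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class TreeHash {
            static final int CHUNK = 8 * 1024 * 1024;   // 8 MiB per leaf; arbitrary choice

            public static byte[] hash(String path) throws Exception {
                long length = new File(path).length();
                ExecutorService pool =
                        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
                List<Future<byte[]>> leaves = new ArrayList<>();
                for (long off = 0; off < length; off += CHUNK) {
                    final long start = off;
                    leaves.add(pool.submit(() -> {               // each chunk is hashed on its own core
                        byte[] buf = new byte[(int) Math.min(CHUNK, length - start)];
                        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
                            f.seek(start);
                            f.readFully(buf);
                        }
                        return MessageDigest.getInstance("SHA-256").digest(buf);
                    }));
                }
                MessageDigest root = MessageDigest.getInstance("SHA-256");
                for (Future<byte[]> leaf : leaves) {
                    root.update(leaf.get());                     // combine leaf digests in file order
                }
                pool.shutdown();
                return root.digest();                            // the tree hash: a hash of the chunk hashes
            }
        }

    This is essentially the construction that the tree-hashing modes of some newer hash functions formalize; the cost is that the digest is not interchangeable with the ordinary single-stream hash of the same file.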

    Read the article

  • "|" pipe operator not working in command line in C++

    - by user332024
    I have a Windows application interacting with a DB2 database. In my application I have code to execute some DB2 commands through the command line, using the Windows API ShellExecuteEx(). The following is the code written to execute a DB2 command through the command line:

        string command = "/c /w /i DB2 UNCATALOG NODE DB_DATABASE >> test.log | echo %date% %time% >> test.log";
        SHELLEXECUTEINFO shellInfo;
        ZeroMemory(&shellInfo, sizeof(shellInfo));
        shellInfo.cbSize = sizeof(shellInfo);
        shellInfo.fMask = SEE_MASK_FLAG_NO_UI | SEE_MASK_NOCLOSEPROCESS;
        //shellInfo.lpFile = "db2cmd";
        shellInfo.lpFile = "db2cmd";
        shellInfo.lpParameters = command.c_str();

    The code executes successfully; however, if test.log is inspected, I only get the result of the DB2 command and not the date and time. As you can see, the command above uses the "|" pipe operator and an echo command to log the date and time to test.log. Please note that if I execute the DB2 command separately on the command line, i.e. not through code, I am able to see the date and time logged along with the DB2 command result in test.log. The following is the full command which I executed on the command line:

        DB2CMD /c /i /w DB2 UNCATALOG NODE DB_DATABASE >> test.log | echo %date% %time% >> test.log

    Since the DB2 command executes successfully through code, I believe the problem is only with the usage of the "|" pipe operator or the echo command.

    Read the article

  • Disabling browser print options (headers, footers, margins) from page?

    - by Anthony
    I have seen this question asked in a couple of different ways on SO and several other websites, but most of them are either too specific or out-of-date. I'm hoping someone can provide a definitive answer here without pandering to speculation. Is there a way, either with CSS or javascript, to change the default printer settings when someone prints within their browser? And of course by "prints from their browser" I mean some form of HTML, not PDF or some other plug-in reliant mime-type. Please note: If some browsers offer this and others don't (or if you only know how to do it for some browsers) I welcome browser-specific solutions. Similarly, if you know of a mainstream browser that has specific restrictions against EVER doing this, that is also helpful, but some fairly up-to-date documentation would be appreciated. (simply saying "that goes against XYZ's security policy" isn't very convincing when XYZ has made significant changes in said policy in the last three years). Finally, when I say "change default print settings" I don't mean forever, just for my page, and I am referring specifically to print margins, headers, and footers. I am very aware that CSS offers the option of changing the page orientation as well as the page margins. One of the many struggles is with Firefox. If I set the page margins to 1 inch, it ADDS this to the half inch it already puts into place. I very much want to reduce the usage of PDFs on my client's site, but the infringement on presentation (as well as the lack of reliability) are their main concern.

    Read the article

  • Understanding Async Concept in WebServices

    - by 8EM
    I've recently had the thrill of developing web service applications. Most of my experience is with GWT, mainly doing most things on the client side and then making an async call back for any additional data needed. However, at the moment I want a process that is triggered on the client side, after which a loop runs on the server side, and when a certain condition is met the server 'pushes' back to the client. This should hopefully reduce processor usage on the client side and also save bandwidth. What is this called? I understand 'polling' is where the client side continuously hits a server; what I want is the opposite. Is this possible? Am I misunderstanding what happens when I trigger an AsyncService in GWT? Please advise. EDIT: Just for further clarification, imagine some kind of weather data service: you trigger 'go' on the client side; then on the server side it checks the temperature, and if it has moved since last time it sends the new reading back to the client; if it hasn't, it keeps looping.
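    What is being described is usually called server push; over plain HTTP it is most often approximated with long polling (often grouped under the name Comet): the client issues a request, the server holds it open until the condition is met, responds, and the client immediately issues the next request. A minimal server-side sketch of the idea is below; the readTemperature() helper and the 30-second timeout are made-up placeholders, and a production version would use the container's asynchronous request support rather than sleeping on a request thread:

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class WeatherLongPollServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String lastParam = req.getParameter("last");   // reading the client already has, if any
                double last = lastParam == null ? Double.NaN : Double.parseDouble(lastParam);
                double current = readTemperature();
                long deadline = System.currentTimeMillis() + 30_000;   // give up after ~30 s and answer anyway
                // Hold the request open until the value actually changes.
                while (current == last && System.currentTimeMillis() < deadline) {
                    try { Thread.sleep(1_000); } catch (InterruptedException e) { break; }
                    current = readTemperature();
                }
                resp.setContentType("text/plain");
                resp.getWriter().print(current);   // client reads this, then immediately re-requests
            }

            private double readTemperature() {
                return 21.5;   // placeholder for the real data source
            }
        }

    On the GWT side the client would simply re-issue the same async call as soon as the previous one returns, so the "loop" lives on the server and the client stays idle in between.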

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (the IDE and the background compiler). If the most important parts (the background compiler and other IDE tasks) run on one core, then that core will max out sooner on a quad core, especially if the background compiler is the heaviest task; I imagine this would be difficult to split across more than one process, so even if VS uses multiple cores you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to occur on one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, erx

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment: The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand. Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • power and modulo on the fly for big numbers

    - by user unknown
    I raise some basis b to the power p and take the modulo m of that. Let's assume b=55170 or 55172 and m=3043839241 (which happens to be the square of 55171). The Linux calculator bc gives these results (we need them as a reference):

        echo "p=5606;b=55171;m=b*b;((b-1)^p)%m;((b+1)^p)%m" | bc
        2734550616
        309288627

    Now calculating 55170^5606 gives a somewhat large number, but since I have to do a modulo operation I thought I could circumvent the usage of BigInt, because of:

        (a*b) % c == ((a%c) * (b%c)) % c
        i.e. (9*7) % 5 == ((9%5) * (7%5)) % 5  =>  63 % 5 == (4 * 2) % 5  =>  3 == 8 % 5

    ... and a^d = a^(b+c) = a^b * a^c, therefore I can split the exponent d into d/2 and d-(d/2) (which covers both even and odd d), so for 8^5 I can calculate 8^2 * 8^3. So my (defective) method, which always reduces modulo m on the fly, looks like this:

        def powMod (b: Long, pot: Int, mod: Long) : Long = {
          if (pot == 1) b % mod
          else {
            val pot2 = pot/2
            val pm1 = powMod (b, pot, mod)
            val pm2 = powMod (b, pot-pot2, mod)
            (pm1 * pm2) % mod
          }
        }

    and fed with some values:

        powMod (55170, 5606, 3043839241L)
        res2: Long = 1885539617
        powMod (55172, 5606, 3043839241L)
        res4: Long = 309288627

    As we can see, the second result is exactly the same as the one above, but the first one looks quite different. I'm doing a lot of such calculations, and they seem to be accurate as long as they stay in the range of Int, but I can't see any error. Using a BigInt works as well, but is way too slow:

        def calc2 (n: Int, pri: Long) = {
          val p: BigInt = pri
          val p3 = p * p
          val p1 = (p-1).pow (n) % (p3)
          val p2 = (p+1).pow (n) % (p3)
          print ("p1: " + p1 + " p2: " + p2)
        }

        calc2 (5606, 55171)
        p1: 2734550616 p2: 309288627   (same result as with bc)

    Can somebody see the error in powMod?
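    For reference, here is the identity the question relies on, written out as an iterative square-and-multiply routine. This is a generic Java illustration, not a fix for the Scala code above; BigInteger is used simply to keep every intermediate product exact, and the expected outputs are the bc values quoted in the question:

        import java.math.BigInteger;

        public class PowModDemo {
            // (a*b) % m == ((a%m) * (b%m)) % m, applied at every step of square-and-multiply.
            static BigInteger powMod(BigInteger base, int exp, BigInteger mod) {
                BigInteger result = BigInteger.ONE;
                BigInteger b = base.mod(mod);
                while (exp > 0) {
                    if ((exp & 1) == 1) result = result.multiply(b).mod(mod);   // odd bit: multiply in
                    b = b.multiply(b).mod(mod);                                  // square the base
                    exp >>= 1;
                }
                return result;
            }

            public static void main(String[] args) {
                BigInteger m = BigInteger.valueOf(3043839241L);                    // 55171^2, from the question
                System.out.println(powMod(BigInteger.valueOf(55170), 5606, m));    // expected 2734550616 per bc
                System.out.println(powMod(BigInteger.valueOf(55172), 5606, m));    // expected 309288627 per bc
            }
        }

    java.math.BigInteger also ships a built-in modPow(exponent, modulus) that does the same thing in one call.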

    Read the article

  • Flash browser game - HTTP + PHP vs Socket + Something else

    - by Maurycy Zarzycki
    I am developing a non-real-time browser RPG game (think Kingdom of Loathing) which would be played from within a Flash app. At first I just wanted to handle the communication with the server using a simple URLLoader to tell PHP what I am doing, and using $_SESSION to store data needed between requests. I wonder if it wouldn't be better to base it on a socket connection, with an app residing on the server written in Java or Python. The problem is I have never written such an app, so I have no idea how much I'd have to "shift" my thinking from simply responding to requests (like PHP) to a continuously running application. I won't hide that I am also concerned about the memory and CPU usage of such a server app when, for example, there would be hundreds of users connected. I have tried to do some research, but thanks to my nil knowledge on the sockets subject I haven't found anything helpful. So, considering the fact that I don't need real-time data exchange, would it be wise to develop the server-side part as a socket server, rather than in plain ol' PHP?

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder - yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB at that, and there are more than 30 of them! This concerns me, though it probably (and hopefully) shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering that it's compressed, compiled code, so surely that could fill up a lot of RAM very quickly. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow for better application performance? When a single .jar is loaded, I understand it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request - is this assumption correct? When using Spring, I'm thinking: I had to include all those fiddly .jars, so wouldn't I be better off just using plain Java, with say at least an ORM solution like Hibernate? So far, Spring has just taken extra time configuring, extra hard drive space, extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There still has to come an ORM, a unit testing framework and bits and pieces here and there. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?
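    On the first question: a JVM does not read a whole JAR into memory up front; classes are located and loaded on demand, and a class's static initialization only happens the first time it is actually used. A tiny sketch that makes the laziness visible (run it with java -verbose:class to also see which JAR each class is read from):

        public class LazyLoadingDemo {
            static class Heavy {
                static { System.out.println("Heavy was just loaded and initialized"); }
            }

            public static void main(String[] args) {
                System.out.println("main started, Heavy not loaded yet");
                new Heavy();   // only now is the class read from its JAR and initialized
            }
        }

    Classes that are never referenced are never loaded at all, so unused classes inside a dependency JAR cost disk space but not heap.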

    Read the article

  • How to make a plot from summaryRprof?

    - by ThorDivDev
    This is a question for a university assignment. I was given three algorithms to calculate the GCD, which I have already implemented. My problem is getting the Rprof results into a plot so I can compare them side by side. From what little understanding I have of Rprof, summaryRprof and plot, Rprof is used like this:

        Rprof()        # to start
        # functions here
        Rprof(NULL)    # to end
        summaryRprof() # to print results

    I understand that plot can take many different kinds of input: x and y values, and something called a data frame, which I assume is a fancy word for a table. To draw different lines and things I need to use this: http://www.harding.edu/fmccown/r/ What I can't figure out is how to get the summaryRprof results into the plot() function.

        > Rprof(filename="RProfOut2.out", interval=0.0001)
        > gcdBruteForce(10000, 33)
        [1] 1
        > gcdEuclid(10000, 33)
        [1] 1
        > gcdPrimeFact(10000, 33)
        [1] 1
        > Rprof(NULL)
        > summaryRprof()
        ?????plot????

    I have been reading on Stack Overflow and other sites that I can also try profr and proftools, although I am not very clear on their usage. The only graph I have been able to make is one using plot(system.time(gcdFunction(10,100))). As always, any help is appreciated.

    Read the article

  • How do you hide a Swing Popup when you click somewhere else?

    - by Casey Watson
    I have a Popup that is shown when a user clicks on a button. I would like to hide the popup when any of the following events occur: the user clicks somewhere else in the application (the background panel, for example), or the user minimizes the application. The JPopupMenu has this behavior, but I need more than just JMenuItems. The following code block is a simplified illustration of the current usage:

        import java.awt.*;
        import java.awt.event.ActionEvent;
        import javax.swing.*;

        public class PopupTester extends JFrame {
            public static void main(String[] args) {
                final PopupTester popupTester = new PopupTester();
                popupTester.setLayout(new FlowLayout());
                popupTester.setSize(300, 100);
                popupTester.add(new JButton("Click Me") {
                    @Override
                    protected void fireActionPerformed(ActionEvent event) {
                        Point location = getLocationOnScreen();
                        int y = (int) (location.getY() + getHeight());
                        int x = (int) location.getX();
                        JLabel myComponent = new JLabel("Howdy");
                        Popup popup = PopupFactory.getSharedInstance().getPopup(popupTester, myComponent, x, y);
                        popup.show();
                    }
                });
                popupTester.add(new JButton("No Click Me"));
                popupTester.setVisible(true);
                popupTester.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            }
        }
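    One approach, sketched below rather than offered as a drop-in for the class above, is to register a global AWTEventListener while the popup is visible: any mouse press outside the popup's contents, and any window iconification, hides it again. The class name and structure here are made up for illustration:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class AutoHidingPopup {
            private Popup popup;
            private final JComponent contents;
            private final AWTEventListener hider = event -> {
                if (event instanceof MouseEvent) {
                    MouseEvent me = (MouseEvent) event;
                    // Hide on any press that is not inside the popup's own contents.
                    if (me.getID() == MouseEvent.MOUSE_PRESSED
                            && !SwingUtilities.isDescendingFrom(me.getComponent(), contents)) {
                        hide();
                    }
                } else if (event instanceof WindowEvent
                        && event.getID() == WindowEvent.WINDOW_ICONIFIED) {
                    hide();   // application was minimized
                }
            };

            public AutoHidingPopup(JComponent contents) { this.contents = contents; }

            public void show(Component owner, int x, int y) {
                popup = PopupFactory.getSharedInstance().getPopup(owner, contents, x, y);
                popup.show();
                Toolkit.getDefaultToolkit().addAWTEventListener(
                        hider, AWTEvent.MOUSE_EVENT_MASK | AWTEvent.WINDOW_EVENT_MASK);
            }

            public void hide() {
                if (popup != null) {
                    popup.hide();
                    popup = null;
                    Toolkit.getDefaultToolkit().removeAWTEventListener(hider);
                }
            }
        }

    An alternative that keeps the built-in auto-hide behavior is to put arbitrary components inside a JPopupMenu rather than bare JMenuItems, since JPopupMenu is itself a container.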

    Read the article

  • How can Flash call a jQuery function in its events?

    - by user2955639
    I want jQuery to do something during certain events while an audio file is playing, so I'm writing a function like this:

        <script>
        $.fn.playMedia = function(options){
            var opts = $.extend({}, {
                swfSrc: '',
                timeUpdated: function(currentTime){},
                startPlay: function(){},
                endPlay: function(){}
            }, options);
            return $(this).each(function(){
                // call flash to play the media whose src is opts.swfSrc
                // Is it possible for flash to call the js functions (opts.timeUpdated,
                // opts.startPlay and opts.endPlay) each time the event is triggered?
            });
        };
        </script>

        // Usage
        <div id="player"></div>
        <script>
        $('#player').playMedia({
            swfSrc: '/path/song.mp3',
            timeUpdated: function(currentTime){
                console.log(currentTime);
            }
        });
        </script>

    I'm a total layman when it comes to Flash; I'm just guessing this could work. I hope someone can tell me how to build a SWF file for this jQuery function, or whether there is an existing jQuery plugin that does this but whose appearance can be redesigned flexibly. Thank you very much!

    Read the article

  • Javascript inheritance: call super-constructor or use prototype chain?

    - by Jeremy S.
    Hi folks, quite recently I read about JavaScript call usage in MDC: https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/call One line of the example shown below I still don't understand. Why are they using inheritance here like this:

        Prod_dept.prototype = new Product();

    Is this necessary? Because there is a call to the super-constructor in Prod_dept() anyway, via Product.call. Is this just common practice? When is it better to use call for the super-constructor, and when to use the prototype chain?

        function Product(name, value){
            this.name = name;
            if(value >= 1000)
                this.value = 999;
            else
                this.value = value;
        }

        function Prod_dept(name, value, dept){
            this.dept = dept;
            Product.call(this, name, value);
        }

        Prod_dept.prototype = new Product();

        // since 5 is less than 1000, value is set
        cheese = new Prod_dept("feta", 5, "food");

        // since 5000 is above 1000, value will be 999
        car = new Prod_dept("honda", 5000, "auto");

    Thanks for making things clearer.

    Read the article

  • Program hangs during socket interaction

    - by herrturtur
    I have two programs, sendfile.py and recvfile.py, that are supposed to interact to send a file across the network. They communicate over TCP sockets. The communication is supposed to go something like this:

        sender =====filename=====> receiver
        sender <===== 'ok' ======= receiver
        or
        sender <===== 'no' ======= receiver
        if ok:
        sender ====== file ======> receiver

    The sender and receiver code is here:

    Sender:

        import sys
        from jmm_sockets import *

        if len(sys.argv) != 4:
            print "Usage:", sys.argv[0], "<host> <port> <filename>"
            sys.exit(1)

        s = getClientSocket(sys.argv[1], int(sys.argv[2]))

        try:
            f = open(sys.argv[3])
        except IOError, msg:
            print "couldn't open file"
            sys.exit(1)

        # send filename
        s.send(sys.argv[3])

        # receive 'ok'
        buffer = None
        response = str()
        while 1:
            buffer = s.recv(1)
            if buffer == '':
                break
            else:
                response = response + buffer

        if response == 'ok':
            print 'receiver acknowledged receipt of filename'
            # send file
            s.send(f.read())
        elif response == 'no':
            print "receiver doesn't want the file"

        # cleanup
        f.close()
        s.close()

    Receiver:

        from jmm_sockets import *

        s = getServerSocket(None, 16001)
        conn, addr = s.accept()

        buffer = None
        filename = str()

        # receive filename
        while 1:
            buffer = conn.recv(1)
            if buffer == '':
                break
            else:
                filename = filename + buffer

        print "sender wants to send", filename, "is that ok?"
        user_choice = raw_input("ok/no: ")

        if user_choice == 'ok':
            # send ok
            conn.send('ok')
            # receive file
            data = str()
            while 1:
                buffer = conn.recv(1)
                if buffer == '':
                    break
                else:
                    data = data + buffer
            print data
        else:
            conn.send('no')

        conn.close()

    I'm sure I'm missing something here along the lines of a deadlock, but I don't know what it is.

    Read the article

  • Windows 2008 VPS hosting experiences

    - by Luke Bennett
    Whilst similar questions exist, I couldn't find any which quite match my request. I'm looking for hosting for some personal .NET projects which for various reasons I do not want to host on our servers at work. I need to be able to host multiple sites and for that reason I'm thinking of a VPS with RDP access for the time being - I don't fancy shared hosting, as I feel that doesn't offer me the flexibility and control I'm looking for. What experiences do people have of Windows 2008 VPS providers? I've come across a few possibilities, although it seems a lot of places are still on Windows 2003 with 2008 'coming soon'. Is a VPS the best way to go? Eventually (depending on how the projects take off) I intend to get a dedicated box, but at this stage it's not cost-effective. Also, what are people's experiences of running SQL Server Express on a VPS? What would you say the minimum requirements are for CPU/memory? I know it's not going to be anywhere near as performant as SQL Server 2005/8 running on a dedicated box, but I'm hoping it will be an acceptable starting point. Any other tips/advice also welcome! Edit: Forgot to mention, I'm ideally looking for UK hosting, although I'm open to alternatives.

    Read the article

  • Small OpenMP program freezes sometimes (gcc, C, Linux)

    - by osgx
    Hello. I just wrote a small OpenMP test, and it does not work correctly all the time:

        #include <omp.h>
        int main()
        {
            int i,j=0;
        #pragma omp parallel
            for(i=0;i<1000;i++)
            {
        #pragma omp barrier
                j+= j^i;
            }
            return j;
        }

    The use of j for writing from all threads is incorrect in this example, BUT that should only make the value of j nondeterministic; instead I get a freeze. Compiled with:

        gcc-4.3.1 -fopenmp a.c -o gcc -static

    Run on a 4-core x86 Core2 Linux server:

        $ ./gcc

    and it freezes (sometimes; roughly 1 freeze per 4-5 fast runs). Strace:

        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] <... futex resumed> ) = 0
        [pid 13119] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13120] futex(0x80cd798, FUTEX_WAIT, 1, NULL <unfinished ...>
        [pid 13109] <... futex resumed> ) = 0
        [pid 13109] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13109] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13119] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13119] futex(0x80d3014, FUTEX_WAKE, 1) = 0
        [pid 13119] futex(0x80d3020, FUTEX_WAIT, 251, NULL <freeze>

    Why do I get a freeze (deadlock)?

    Read the article

  • C++ RPC library suggestions

    - by Oxsnarder
    I'm looking for suggestions regarding RPC libraries implemented in C++, for C++ developers. Some requirements/constraints:

    - Should work on both Linux/Unix and Win32 systems
    - Be able to execute free functions and class methods
    - Hopefully written in modern C++, not 90's/Java-esque C++
    - Be able to function over networks and heterogeneous architectures
    - Not too slow or inefficient
    - Hopefully provides interfaces for TR1-style std::function et al.

    My example usage is to invoke the free function foo on a remote machine:

        ---snip---
        // foo translation unit
        int foo(int i, int j)
        {
            return i + j;
        }
        ---snip---

        ---snip---
        // client side main
        int main()
        {
            // register foo on client and server
            // set up necessary connections and state
            int result;
            if (RPCmechanism.invoke("foo", 4, 9, result))
                std::cout << "foo(4,9) = " << result << std::endl;
            else
                std::cout << "failed to invoke foo(4,9)!" << std::endl;
            return 0;
        }
        ---snip---

    Something that can achieve the above or similar would be great.

    Read the article
