Search Results

Search found 20970 results on 839 pages for 'real mode'.

Page 532/839

  • Automatically deleting pyc files when corresponding py is moved (Mercurial)

    - by Oddthinking
    (I foresaw this problem might happen 3 months ago, and was told to be diligent to avoid it. Yesterday, I was bitten by it, hard, and now that it has cost me real money, I am keen to fix it.) If I move one of my Python source files into another directory, I need to remember to tell Mercurial that it moved (hg move). When I deploy the new software to my server with Mercurial, it carefully deletes the old Python file and creates it in the new directory. However, Mercurial is unaware of the pyc file in the same directory, and leaves it behind. The old pyc is used preferentially over the new Python file by other modules in the same directory. What ensues is NOT hilarity. How can I persuade Mercurial to automatically delete my old pyc file when I move the Python file? Is there another, better practice? Trying to remember to delete the pyc file from all the Mercurial repositories isn't working.
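
    One possible remedy, sketched below as an assumption rather than anything taken from the question: a small Python script that deletes any .pyc whose .py has disappeared, which could be wired up as a Mercurial update hook on the server. The script name, the hook wiring, and the side-by-side .pyc layout of Python 2 are all assumptions here.

        # cleanup_orphan_pyc.py - hypothetical helper, e.g. run from an
        # [hooks] update entry in the repository's hgrc after each deploy.
        import os
        import sys

        def remove_orphan_pyc(root):
            """Delete every .pyc file whose corresponding .py no longer exists."""
            removed = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.endswith(".pyc"):
                        pyc_path = os.path.join(dirpath, name)
                        py_path = pyc_path[:-1]          # foo.pyc -> foo.py
                        if not os.path.exists(py_path):
                            os.remove(pyc_path)
                            removed += 1
            return removed

        if __name__ == "__main__":
            target = sys.argv[1] if len(sys.argv) > 1 else "."
            print("removed %d orphaned .pyc files" % remove_orphan_pyc(target))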

    Read the article

  • Can we run a Windows service or an EXE in an Azure website or in a Virtual Machine?

    - by Arun Rana
    I have experience with cloud services/hosted services on Azure. However, for another project I am confused about which option to choose in terms of functionality. I have a two-tier ASP.NET app, and alongside it I need to run a Windows service or EXE that does some work every day (like fetching data), so my questions are as follows.
    Regarding Azure Websites: Can I access RDP if I move to a reserved instance? Can I run a Windows service/EXE?
    Regarding Virtual Machines: Is it the same as a dedicated server? Can I use WASD as the database from an application residing in the same VM? I think I can run any EXE and install anything, but Azure is going to recycle the instance; if so, what happens on recycling? Can I use the new Windows Server 2012 VHD in it?
    Azure Websites and VMs are both in preview mode, so is it reliable to use them in production?

    Read the article

  • Bitbanging a PIO on Coldfire/ucLinux

    - by G Forty
    Here's the problem: I need to program some hardware via 2 pins of the PIO (1 clock, 1 data). Timing constraints are tight - 10ms clock cycle time. All this, of course, whilst I maintain very high level services (CAN bus, TCP/IP). The downstream unit also ACKs by asserting a PIO pin, configured as an input, high. So this loop has to both read and write. I need to send 16 bits in the serial stream. Is there an established way to do this sort of thing, or should I simply get the hardware guys to add a PIC or somesuch? I'd much prefer to avoid exotics like RTAI extensions at this stage. I did once see a reference to user-mode IO which implied a possible interrupt-driven driver, but lost track of it. Any pointers welcomed.

    Read the article

  • Multi-Threading - Cleanup strategy at program end

    - by weismat
    What is the best way to shut down a multi-threaded application cleanly? I am starting several socket connections from the main thread on separate sockets, waiting in the main thread until the end of my business day, and currently using System.Environment.Exit(0) to terminate. This leads to an unhandled exception in one of the children. Should I stop the threads from the list? I have been reluctant to implement any real stopping in the children yet, so I am wondering about best practice. The sockets are all wrapped nicely with proper destructors for logging out and closing, but it still leads to errors.
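
    The question is about .NET, but the general pattern it circles (ask each worker to stop, then wait for it, instead of killing the process) can be sketched in a few lines of Python; everything below is illustrative and none of it comes from the original post.

        import threading

        stop_event = threading.Event()

        def worker(sock_name):
            # Poll a shared event instead of blocking forever, so the main
            # thread can ask every worker to finish cleanly.
            while not stop_event.is_set():
                stop_event.wait(0.5)          # placeholder for real socket work
            print(sock_name, "logged out and closed cleanly")

        threads = [threading.Thread(target=worker, args=("sock%d" % i,)) for i in range(3)]
        for t in threads:
            t.start()

        stop_event.set()                      # end of business day: request shutdown
        for t in threads:
            t.join()                          # wait, rather than an Exit(0)-style kill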

    Read the article

  • HSQLDB Constraint Violation & SQL Query Log for an HSQLDB in-memory setup

    - by shipmaster
    We have a setup where we are using an embedded HSQLDB for backing Hibernate/JPA unit tests in Java, and we are using the in-memory database mode since we simply want the database thrown away after the test run. My problem is that one of the tests is failing due to a constraint violation; HSQLDB lists the column as SYS_CT_286, and the query that appears in the log is the prepared statement, where I can't see what the actual parameter values are (they are replaced by '?'). My questions are: 1- Is there a way in which I can see the actual SQL being executed (like the MySQL query log, for example)? 2- What exactly is SYS_CT_286? It is not one of my columns; is it a generated column? Is there something obvious that may be wrong with it? Thanks.

    Read the article

  • How to edit localized forms all at one time in Visual Studio

    - by SoMoS
    Hello, I have several forms that are localized into multiple languages. If I make a change on one form (for example, changing the size of a textbox), the change is applied only to the localized version of the form that I currently have selected. Is there a way of extending the change I've made to the other localized versions of the same form, to avoid having to go through them one by one doing the same change by hand? Thanks in advance for your help. EDIT: I said "different forms" when the real situation is that there is one form and several resources. In the end it is just as if you had a different form for each locale, because the form is built from the data in the resource. The problem is still the same: the edits done on the form are stored in one resource file, and I have to apply those edits to all the other resources by hand.

    Read the article

  • Recover a backup copy of an Ubuntu Linux installation on a USB stick using dd

    - by Werner
    Hi, I installed Ubuntu 10.04 on a USB stick in persistent-install mode, so I could boot my laptop or my desktop computer from the stick. At one point I needed the 8GB stick for another purpose, so I backed it up to my desktop from Mac OS X with:
        dd if=/dev/disks3s of=/Users/jack/Desktop/usb_copy
    Now I am trying to do the opposite, after having used the stick (which was reformatted to NTFS in the meantime), just doing:
        dd if=/Users/jack/Desktop/usb_copy of=/dev/disks3s
    but although I can see that almost all of the files are there, I cannot boot from it again. It is also strange that the file permissions look odd, something like _user. What can I do? Thanks

    Read the article

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million or 5 million users, that becomes really difficult, because I would want to become more distributed, in which case an auto-increment primary key would be useless: each node would be creating the same primary keys. Is the solution to use natural primary keys? I am having a really hard time thinking of a natural primary key for this bunch of users. The problem is they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
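
    One common way out, added here only as an illustrative assumption (it is not part of the question): let every node mint universally unique identifiers instead of hunting for a natural key. A Python sketch:

        import uuid

        # Each node generates IDs independently, with no central counter;
        # collisions between random (version 4) UUIDs are astronomically unlikely.
        new_user_id = uuid.uuid4()
        print(new_user_id)    # e.g. stored as CHAR(36) or BINARY(16) in the users table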

    Read the article

  • IMB_ibImageFromMemory: unknown fileformat?

    - by Antoni4040
    Here's my add-on:

        import bpy
        import os
        import sys
        import subprocess
        import threading

        class ExportToGIMP(bpy.types.Operator):
            bl_idname = "uv.exporttogimp"
            bl_label = "Export to GIMP"

            def execute(self, context):
                self.filepath = os.path.join(os.path.dirname(bpy.data.filepath), "Layout")
                bpy.ops.uv.export_layout(filepath=self.filepath, check_existing=True,
                                         export_all=False, modified=False, mode='PNG',
                                         size=(1024, 1024), opacity=0.25, tessellated=False)
                self.files = os.path.dirname(bpy.data.filepath)
                cmd = " (python-fu-bgsync RUN-NONINTERACTIVE)"
                subprocess.Popen(['gimp', '-b', cmd])
                self.update()
                return {'FINISHED'}

            def update(self):
                self.thread = threading.Timer(3.0, self.update).start()
                self.filepath2 = "/home/antoni4040/????afa/Layout1.png"
                bpy.ops.image.open(filepath=self.filepath2, filter_blender=False,
                                   filter_image=True, filter_movie=False, filter_python=False,
                                   filter_font=False, filter_sound=False, filter_text=False,
                                   filter_btx=False, filter_collada=False, filter_folder=True,
                                   filemode=9, relative_path=False)
                tex = bpy.data.textures.new(name = self.filepath2, type = "IMAGE")

        def exporttogimp_menu(self, context):
            self.layout.operator(ExportToGIMP.bl_idname, text="Export To GIMP")

        bpy.utils.register_class(ExportToGIMP)
        bpy.types.IMAGE_MT_uvs.append(exporttogimp_menu)

    But I can't load an image, because I get this:

        Reached EOF while decoding PNG
        IMB_ibImageFromMemory: unknown fileformat (/home/antoni4040/????afa/Layout1.png)

    What is that?

    Read the article

  • Python: How do I create a reference to a reference?

    - by KCArpe
    Hi, I am traditionally a Perl and C++ programmer, so apologies in advance if I am misunderstanding something trivial about Python! I would like to create a reference to a reference. Huh? Ok. All objects in Python are actually references to the real object. So, how do I create a reference to this reference? Why do I need/want this? I am overriding sys.stdout and sys.stderr to create a logging library. I would like a (second-level) reference to sys.stdout. If I could create a reference to a reference, then I could create a generic logger class whose init function receives a reference to the file handle reference that will be overridden, e.g. sys.stdout or sys.stderr. Currently, I must hard-code both values. Cheers, Kevin
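
    A sketch of one way to get that second level of indirection, written here as an assumption about what the logger might look like (the class name and the "[log]" prefix are invented): hold the attribute *name* rather than the stream object, and look it up on every use.

        import sys

        class StreamLogger(object):
            """Minimal sketch: store the name of the stream ('stdout' or
            'stderr') instead of the stream object itself."""

            def __init__(self, stream_name):
                self.stream_name = stream_name

            @property
            def stream(self):
                # Re-resolve sys.<name> on every access; this is the
                # "reference to a reference" the question asks for, so a
                # later reassignment of sys.stdout/sys.stderr is still seen.
                return getattr(sys, self.stream_name)

            def write(self, text):
                self.stream.write("[log] " + text)

        err_log = StreamLogger("stderr")
        err_log.write("something happened\n")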

    Read the article

  • Design - Where should objects be registered when using Windsor

    - by Fredrik Jansson
    I will have the following components in my application:

        DataAccess
        DataAccess.Test
        Business
        Business.Test
        Application

    I was hoping to use Castle Windsor as the IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing. My question is: who should be responsible for registering the objects into Windsor? I have a couple of ideas:
    1. Each layer registers its own objects. To test the BL, the test bench could register mock classes for the DAL.
    2. Each layer registers the objects of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL objects and register the mock objects.
    3. The application (or test app) registers all objects of all the dependencies.
    Can someone help me with some ideas and pros/cons of the different paths? Links to example projects using Castle Windsor in this way would be very helpful.

    Read the article

  • Linux core dumps are too large!

    - by themoondothshine
    Hey guys, recently I've been noticing an increase in the size of the core dumps generated by my application. Initially they were just around 5MB in size and contained around 5 stack frames, and now I have core dumps of 2GB and the information contained within them is no different from the smaller dumps. Is there any way I can control the size of the core dumps generated? Shouldn't they be at least smaller than the application binary itself? Binaries are compiled in this way:
    1. Compiled in release mode with debug symbols (i.e., the -g compiler option in GCC).
    2. Debug symbols are copied into a separate file and stripped from the binary.
    3. A GNU debug symbols link is added to the binary.
    At the beginning of the application there's a call to setrlimit which sets the core limit to infinity -- is this the problem?
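
    For what it's worth, the knob the question already mentions is easy to experiment with. Below is a minimal sketch of the same setrlimit idea in Python (the application itself is C++, so this is purely illustrative, and the 100 MB cap is an arbitrary assumption):

        import resource

        # Instead of an infinite core limit, cap dumps at roughly 100 MB;
        # the kernel truncates any core file it would have written beyond this.
        soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
        resource.setrlimit(resource.RLIMIT_CORE, (100 * 1024 * 1024, hard))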

    Read the article

  • Processing files with C# in folders whose names contain spaces

    - by Nigel Ainscoe
    There are plenty of C# samples that show how to manipulate files and directories, but they inevitably use folder paths that contain no spaces. In the real world I need to be able to process files in folders with names that contain spaces. I have written the code below, which shows how I have solved the problem. However, it doesn't seem very elegant and I wonder if anyone has a better way.

        class Program
        {
            static void Main(string[] args)
            {
                var dirPath = @args[0] + "\\";
                string[] myFiles = Directory.GetFiles(dirPath, "*txt");
                foreach (var oldFile in myFiles)
                {
                    string newFile = dirPath + "New " + Path.GetFileName(oldFile);
                    File.Move(oldFile, newFile);
                }
                Console.ReadKey();
            }
        }

    Regards, Nigel Ainscoe
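
    The point the code circles around (spaces in folder names need no special treatment as long as paths are built with the platform's path APIs rather than hand-concatenated strings) holds in any language. Here is the same operation sketched in Python, with a purely hypothetical directory name:

        import os
        import shutil

        dir_path = "/tmp/My Folder With Spaces"        # hypothetical example path
        for name in os.listdir(dir_path):
            if name.endswith(".txt"):
                # os.path.join copes with spaces (and separators) for us.
                old_path = os.path.join(dir_path, name)
                new_path = os.path.join(dir_path, "New " + name)
                shutil.move(old_path, new_path)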

    Read the article

  • First Call to a Controller, Constant is defined, Second call, "uninitialized constant Oauth"?

    - by viatropos
    I am trying to get the OAuth gem to work with Rails 3 and I'm running into this weird problem... (independent of the gem, I think I've run into this once before). I have a controller called "OauthTestController" and a model called "ConsumerToken". The model looks like this:

        require 'oauth/models/consumers/token'

        class ConsumerToken < ActiveRecord::Base
          include Oauth::Models::Consumers::Token
        end

    When I go to "/oauth_test/twitter", it loads the Oauth::Models::Consumers::Token module and I'm able to connect to twitter no problem. But the second time I try it (just refresh the /oauth_test/twitter url), it gives me this error:

        NameError (uninitialized constant Oauth):
          app/models/consumer_token.rb:4
          app/models/twitter_token.rb:2
          app/controllers/oauth_test_controller.rb:66:in `load_consumer'

    Why is that? It has something to do with load paths or being in development mode maybe?

    Read the article

  • Cure for puzzle piece programming habits?

    - by Recursion
    Even though I went to a decent CS school, I was still taught with the mentality of programming with puzzle pieces. By puzzle pieces I mean looking up code segments at each step of the development process and adding them together as needed, eventually gathering all of the pieces and having a properly working program. So, as an example, if in my program the next step is to tokenize a string, I go to Google and search "how do I tokenize a string in <language>", instead of critically thinking about its implementation. I personally don't think it's a very good way to program, and I always seem to forget everything that I have searched for. So how can I get out of this puzzle-piece mode of programming that I was taught?

    Read the article

  • Modern, Non-trivial, Pygame Tutorials?

    - by Gregg Lind
    What are some 'good', non-trivial Pygame tutorials? I realize "good" is relative. As an example, a good one (to me) is the one that describes how to use pygame.camera: it's recent, uses a modern Pygame (1.9), and is non-trivial in that it shows how to use the module for a real application. I'd like to find others. A lot of the ones on the Pygame site are from the 1.3 era or earlier! Info on related projects, like Gloss, is welcome as well. (If your answer is "read the source of some Pygame games", please link to the source of particular ones and note what is good about them.)
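
    For readers who haven't met the module mentioned above, a minimal pygame.camera sketch looks roughly like this (assuming Pygame 1.9 and a V4L2 webcam; the camera index and output filename are made up):

        import pygame
        import pygame.camera

        pygame.camera.init()
        cams = pygame.camera.list_cameras()          # e.g. ['/dev/video0']
        cam = pygame.camera.Camera(cams[0], (640, 480))
        cam.start()
        frame = cam.get_image()                      # a pygame.Surface with the current frame
        pygame.image.save(frame, "snapshot.png")
        cam.stop()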

    Read the article

  • Detecting incorrect key using AES/GCM in JAVA

    - by 4r1y4n
    I'm using AES to encrypt/decrypt some files in GCM mode using BouncyCastle. When I provide the wrong key for decryption there is no exception. How should I check that the key is incorrect? My code is this:

        SecretKeySpec incorrectKey = new SecretKeySpec(keyBytes, "AES");
        IvParameterSpec ivSpec = new IvParameterSpec(ivBytes);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding", "BC");
        byte[] block = new byte[1048576];
        int i;
        cipher.init(Cipher.DECRYPT_MODE, incorrectKey, ivSpec);
        BufferedInputStream fis = new BufferedInputStream(
                new ProgressMonitorInputStream(null, "Decrypting ...", new FileInputStream("file.enc")));
        BufferedOutputStream ro = new BufferedOutputStream(new FileOutputStream("file_org"));
        CipherOutputStream dcOut = new CipherOutputStream(ro, cipher);
        while ((i = fis.read(block)) != -1) {
            dcOut.write(block, 0, i);
        }
        dcOut.close();
        fis.close();

    Thanks
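
    Conceptually, GCM does detect a wrong key: the authentication tag check fails when decryption is finalized, so the thing to verify is that finalization actually runs and that its error is not swallowed on the way up. As a language-neutral illustration only (this uses Python's cryptography package, not BouncyCastle, and is not the poster's code), the failure surfaces as an exception at decrypt time:

        import os
        from cryptography.exceptions import InvalidTag
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)
        wrong_key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)

        ciphertext = AESGCM(key).encrypt(nonce, b"some file contents", None)

        try:
            AESGCM(wrong_key).decrypt(nonce, ciphertext, None)
        except InvalidTag:
            # The tag check at the end of GCM decryption is what reveals
            # an incorrect key (or tampered ciphertext).
            print("wrong key detected")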

    Read the article

  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15km fibre optic line across which fibre channel and 10GbE are multiplexed (passive optical CWDM). For FC we have long-distance lasers suitable up to 40km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs, which can do max. 4Gb fibre channel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550, 1570, 1590 and 1610nm for FC and 1530nm for 10GbE. The problem is that the 4GbFC fabrics are almost never clean. Sometimes they are for a while, even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently in the order of 50-100 errors per 5 minutes with 1Gb/s of traffic.

    Optics. Here is the power output of one port summarized (collected using sfpshow on different switches):

        SITE-A                units=uW (microwatt)                 SITE-B
        FAB1   SW1  TX 1234.3   RX   49.1      SW3  1550nm (ko)   RX  95.2   TX 1175.6
        FAB2   SW2  TX 1422.0   RX  104.6      SW4  1610nm (ok)   RX  54.3   TX 1468.4

    What I find curious at this point is the asymmetry in the power levels. While SW2 transmits at 1422uW, which SW4 receives at 104uW, SW2 only receives the SW4 signal (of similar original power) at 54uW. Vice versa for SW1/SW3. Anyway, the SFPs have RX sensitivity down to -18dBm (ca. 20uW), so in any case it should be fine... But nothing is. Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550nm ones shown above with "ko"). The 1610nm ones apparently are ok; they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements, but for some reason I don't believe they will make things better, as the apparently good ones don't produce ZERO errors either. Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line. No idea why. That equipment was eliminated because of the problems, so we now only have: the long-distance laser in the switch, a (new) 10m LC-SC monomode cable to the mux (for each fabric), the leased line, and the same thing reversed on the other side of the link.

    FC switches. Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously):

        Area Number:              0
        Speed Level:              4G
        Fill Word(On Active)      0(Idle-Idle)
        Fill Word(Current)        0(Idle-Idle)
        AL_PA Offset 13:          OFF
        Trunk Port                ON
        Long Distance             LS
        VC Link Init              OFF
        Desired Distance          32 Km
        Reserved Buffers          70
        Locked L_Port             OFF
        Locked G_Port             OFF
        Disabled E_Port           OFF
        Locked E_Port             OFF
        ISL R_RDY Mode            OFF
        RSCN Suppressed           OFF
        Persistent Disable        OFF
        LOS TOV enable            OFF
        NPIV capability           ON
        QOS E_Port                OFF
        Port Auto Disable:        OFF
        Rate Limit                OFF
        EX Port                   OFF
        Mirror Port               OFF
        Credit Recovery           ON
        F_Port Buffers            OFF
        Fault Delay:              0(R_A_TOV)
        NPIV PP Limit:            126
        CSCTL mode:               OFF

    Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next or how to proceed? If we can't make 4GbFC work reliably, I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable. Oh, and BTW, we are in contact with every one of the manufacturers (FC switch, MUX, SFPs, ...). Except for the SFPs to be changed (some have been changed before), nobody has a clue. Brocade SAN Health says the fabric is ok. The MUX, well, it's passive, it's only a prism, nature at its best.

    Any shots in the dark?

    APPENDIX: Answers to your questions

    @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning, when we still had the active MUX, we rented a long-distance laser once to put it into the switch directly in order to run tests for a day; during that day, of course, it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean rebuilding the entire SAN with those only to test. Alternative SFPs, well, they're hard to come by just like that.

    @longneck: The line is rented. It's a dark fibre (9um monomode), so there's no one else on it. Sure there are splices. I can't go and look, but I have to trust they have been done correctly. As I said, the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself because it's way too expensive.

    @mdpc: What would be the "wrong" type of cable according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah, I know there are the green ones where the fibre is cut off at a certain angle etc. But we have the correct ones for all that I know.

    Progress Report #1

    We have had two fabrics (= 2x2 switches) with Brocade 5100s on FabricOS 6.4.1 and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the long-distance ISLs (one in each fabric) it turned out that with FOS 6.4.1, setting long distance issues warnings about the VC Init setting and consequently the fill word. But those are only warnings. FOS 7.0.2 requires you to make modifications to VCI and the fill word for long-distance links. Setting FOS 6.4.1 to the LS (static long-distance) setting with the wrong VCI and fill word setting made the whole fabric inoperational (stuck in an SCN loop; use fabriclog -s to see it, you don't see it anywhere else, no port error counters or anything increasing). Currently I'm giving the one fabric with the IMHO more correct settings a beating and it seems to do fine, whereas the other one, without much traffic, still has errors here and there.

    In short:
    - We have eliminated the active part of the MUX (the FC retimer).
    - We are putting the long-distance SFPs into the end equipment itself.
    - Just to be sure, we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX.
    - We are now trying out several long-distance configs.

    It's almost black magic. Everything that happens is mostly empirical; no one seems to have a clue about the exact reasons for doing something. ("We have tried this, and it didn't work, then we tried that and it worked, so we stuck with that." But no one really seems to know why.) I'll keep you updated.

    Progress Report #2

    We got the new lasers for one of the fabrics on warranty. It's ultra clean, even on 4GbFC. They're transmitting with roughly 2mW (3dBm) whereas the others are only at 1.5mW (1.5dBm), although that should really be enough. The other fabric (where the lasers are apparently ok) still produces one or two CRCs infrequently. Using sfpshow, the SFP producing the actual RX errors shows:

        Status/Ctrl: 0x82
        Alarm flags[0,1] = 0x5, 0x40
        Warn Flags[0,1]  = 0x5, 0x40

    Now I'll have to find out what that means. Not sure if it was there before. Well, I'll first clear my head with a week of vacation. 8-)

    Read the article

  • WPF ListView.CurrentChanged too fast for binding

    - by matt
    My case:
    - MVVM
    - ListView + Details (custom UserControl)
    - List bound to MV.Items (IsSynchronizedWithCurrent=true)
    - Details bound to MV.Items.Current
    - MV.Items.Count == 100
    - about 0.2 sec to read details (lazy mode)
    When I hold the down arrow on the list, very strange things happen: the list items change order, the current item changes in random order, CPU usage increases drastically, and eventually everything hangs. I've read some posts saying that one should start a timer or run the handler in the background, but I am not able to do that, since WPF does all the binding for me. Is there some way to instruct the binding in my DetailsControl to wait a while before accepting CurrentItem? Or should I just give up on the clean solution and write custom code in my MV to handle that?

    Read the article

  • hackage package dependencies and future-proof libraries

    - by yairchu
    In the dependencies section of a cabal file:

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0

    should I be doing something like

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0 && < 0.3.0

    (putting upper limits on the versions of packages I depend on) or not? I'll use a real example: my "List" package on Hackage (List monad transformer and class).
    - If I don't put the limit, my package could be broken by a change in "transformers".
    - If I do put the limit, a user who uses "transformers" but is on a newer version of it will not be able to use lift and liftIO with ListT, because it's only an instance of these classes for transformers-0.2.x.
    I guess that applications should always put upper limits so that they never break, so this question is only about libraries: shall I put upper version limits on dependencies or not?

    Read the article

  • Using boost::asio::async_read with stdin?

    - by yeus
    Hi people, short question: I have a real-time simulation which runs as a background process and is connected with pipes to the calling program. I want to send commands to that process using stdin to get certain information from it via stdout. Now, because it is a real-time process, the input has to be non-blocking. Is boost::asio::async_read in conjunction with iostream::cin a good idea for this task? How would I use that function if it is feasible? Any more suggestions?
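
    For comparison only (the question is about C++/Boost; this is merely a sketch of the underlying non-blocking-stdin idea in Python, assuming a Unix-like system): polling stdin with a readiness API keeps the main loop from ever blocking on input.

        import selectors
        import sys

        sel = selectors.DefaultSelector()
        sel.register(sys.stdin, selectors.EVENT_READ)

        while True:
            # A short timeout means the simulation loop keeps running even
            # when no command has arrived on stdin.
            for key, _events in sel.select(timeout=0.01):
                command = key.fileobj.readline().strip()
                if command == "quit":
                    raise SystemExit
                print("got command:", command, file=sys.stderr)
            # ... one step of the real-time simulation would run here ...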

    Read the article

  • NSTimer to fire while device is locked

    - by edie
    Hi, I'm currently creating an alarm. I use NSTimer to schedule my alarms. My problem is that when the device is put into locked mode, my NSTimer doesn't fire. I think the NSTimer will not fire because my app goes into the suspended state when the device is locked. Can you help me find a solution to my problem? I've found some topics about UIBackgroundModes, but I don't know how that would help me. Thanks. The problem with UILocalNotification is that when the device is on silent, the sound will not be heard. My implementation is: I use NSTimer to fire an alarm when the app is in the foreground, or when the device is locked but the app is still running. When applicationDidEnterBackground: is called, I schedule the UILocalNotification as the alarm.

    Read the article

  • Ruby - Possible to pass a block as a param as an actual block to another function?

    - by Markus O'Reilly
    This is what I'm trying to do:

        def call_block(in_class = "String", &block)
          instance = eval("#{in_class}.new")
          puts "instance class: #{instance.class}"
          instance.instance_eval{ block.call }
        end

        # --- TEST EXAMPLE ---

        # This outputs "class: String" every time
        "sdlkfj".instance_eval { puts "class: #{self.class}" }

        # This will only output "class: Object" every time
        # I'm trying to get this to output "class: String" though
        call_block("String") { puts "class: #{self.class}" }

    On the line where it says "instance.instance_eval{ block.call }", I'm trying to find another way to make the new instance variable run instance_eval on the block. The only way I can think of to get it to do that is to pass instance_eval the original block, not as a variable or anything, but as a real block like in the test example. Any tips?

    Read the article

  • Problems with testing in app purchases

    - by sashaeve
    I am trying to test my application with in-app purchases. I created the features and a test user, logged out from iTunes on the iPhone, and used a developer certificate. I load the app from Xcode in debug mode. When I click the "Buy" button I pass all the checks for internet availability and canMakePayments, and call:

        SKPayment *payment = [SKPayment paymentWithProductIdentifier:featureId];
        [[SKPaymentQueue defaultQueue] addPayment:payment];

    But all I see is a pending view, and after some minutes it fails in:

        - (void) failedTransaction: (SKPaymentTransaction *)transaction {
            if (transaction.error.code != SKErrorPaymentCancelled) {
                NSLog(@"failedTransaction");
            }
            [[MKStoreManager sharedManager] paymentCanceled];
            [[SKPaymentQueue defaultQueue] finishTransaction: transaction];
        }

    Please advise in what direction I should go to figure out the problem and what else I should check. P.S. All related questions on SO were checked, with no luck.

    Read the article

  • Streaming a non-PCM WAV file to a SilverLight application

    - by Satumba
    Hi, I would like to allow users to play back recorded WAV files that are stored on a server, with a Silverlight application as the client. I saw that there is a way to play a WAV file in Silverlight (here), but when I tried to implement it, I got an error playing the file because it is not in PCM format but encoded. The files I'm trying to play are encoded with a special encoder, so I thought the only way is to decode the WAV file on the server and stream it back to the client. The limitation is that the decoding should happen in real time, because it is not reasonable to convert all the existing WAV files up front. Is it possible to do this? Which streamer can I use? (Can Windows Media Services help here?) Does anybody have experience with such a scenario? Appreciate your help.

    Read the article
