Search Results

Search found 23545 results on 942 pages for 'parallel task library'.

  • Compiling scalafx for Java 7u7 (that contains JavaFX 2.2) on OS X

    - by akauppi
    The compilation instructions of scalafx say to do:

        export JAVAFX_HOME=/Path/To/javafx-sdk2.1.0-beta
        sbt clean compile package make-pom package-src

    However, with the new packaging of JavaFX as part of the Java JDK itself (i.e. 7u7 for OS X) there no longer seems to be such a 'javafx-sdkx.x.x' folder. The Oracle docs say that the JavaFX JDK is placed alongside the main Java JDK (in the same folders). So I do:

        $ export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk
        $ sbt clean
        [warn] Using project/plugins/ (/Users/asko/Sources/scalafx/project/plugins) for plugin configuration is deprecated.
        [warn] Put .sbt plugin definitions directly in project/,
        [warn] .scala plugin definitions in project/project/,
        [warn] and remove the project/plugins/ directory.
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins/project
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins
        [error] java.lang.NullPointerException
        [error] Use 'last' for the full log.
        Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?

    Am I doing something wrong, or is scalafx not yet compatible with the latest Java release (7u7, JavaFX 2.2)? What can I do? http://code.google.com/p/scalafx/

    Addendum: ...and finally (following Igor's solution below) sbt run launches the colorful circles demo easily (well, if one has a supported GPU, that is). Oracle claims that "JavaFX supports graphic hardware acceleration on any Mac OS X system that is Lion or later", but I am inclined to think the NVidia-powered Mac Mini I'm using does software rendering. A recent MacBook Air (Core i7) is a completely different beast! :)

  • Using Facelets 1.1.15 (external Facelets) in JSF2

    - by Odelya
    Hi! I have upgraded to JSF2 but am still running with Facelets 1.1.15. I have these parameters in web.xml:

        <context-param>
            <param-name>org.ajax4jsf.VIEW_HANDLERS</param-name>
            <param-value>com.sun.facelets.FaceletViewHandler</param-value>
        </context-param>
        <context-param>
            <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name>
            <param-value>true</param-value>
        </context-param>

    I am trying to create my own component following this example step by step: http://www.ibm.com/developerworks/java/library/j-jsf2fu2/index.html#tip3 Everything looks fine, but I get an error that it doesn't recognize the tag. Has it got to do with Facelets 1.1.15? Does this work only with VDL? Is there a way to use 1.1.15 and custom components in JSF2? As well - I use Tomcat 6.

  • Problems with Getting Remote Contents using Google App Engine

    - by dade
    Here is the client-side code. It is running inside a Google Gadget:

        var params = {};
        params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.JSON;
        var url = "http://invplatformtest.appspot.com/getrecent/";
        gadgets.io.makeRequest(url, response, params);

    The response function is:

        function response(obj) {
            var r = obj.data;
            alert(r['name']);
        }

    while on the server end, the Python code sending the JSON is:

        class GetRecent(webapp.RequestHandler):
            def get(self):
                self.response.out.write({'name':'geocities'})  # i know this is where the problem is, so how do i encode JSON in GAE?

    which is just supposed to send back a JSON-encoded string, but when I run this, the JavaScript throws the following error:

        r is null
        alert(r['name']);

    If I were receiving just TEXT content and my server sent TEXT, everything works fine. I only get this problem when I am trying to send JSON. Where exactly is the problem? Am I encoding the JSON the wrong way on App Engine? I tried using the JSON library but it looks as if this is not supported. Where is the problem exactly? :(

  • Using stringstream instead of `sscanf` to parse a fixed-format string

    - by John Dibling
    I would like to use the facilities provided by stringstream to extract values from a fixed-format string as a type-safe alternative to sscanf. How can I do this? Consider the following specific use case. I have a std::string in the following fixed format:

        YYYYMMDDHHMMSSmmm

    Where:

        YYYY = 4 digits representing the year
        MM   = 2 digits representing the month ('0' padded to 2 characters)
        DD   = 2 digits representing the day ('0' padded to 2 characters)
        HH   = 2 digits representing the hour ('0' padded to 2 characters)
        MM   = 2 digits representing the minute ('0' padded to 2 characters)
        SS   = 2 digits representing the second ('0' padded to 2 characters)
        mmm  = 3 digits representing the milliseconds ('0' padded to 3 characters)

    Previously I was doing something along these lines:

        string s = "20101220110651184";
        unsigned year = 0, month = 0, day = 0, hour = 0, minute = 0, second = 0, milli = 0;
        sscanf(s.c_str(), "%4u%2u%2u%2u%2u%2u%3u", &year, &month, &day, &hour, &minute, &second, &milli);

    The width values are magic numbers, and that's ok. I'd like to use streams to extract these values and convert them to unsigneds in the interest of type safety. But when I try this:

        stringstream ss;
        ss << "20101220110651184";
        ss >> setw(4) >> year;

    year retains the value 0. It should be 2010. How do I do what I'm trying to do? I can't use Boost or any other 3rd party library, nor can I use C++0x.
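
    One workable direction - offered here only as a hedged sketch, since this listing does not include an accepted answer - is to slice each fixed-width field out of the string and run the slice through its own stringstream; setw() has no effect on numeric extraction, which is why the attempt above leaves year at 0. The field() helper below is a name invented for this illustration, not part of any library:

        #include <iostream>
        #include <sstream>
        #include <string>

        // Hypothetical helper for this sketch: convert a fixed-width slice of
        // the input to unsigned. Slicing with substr() does the job that
        // setw() cannot do for numeric extraction.
        unsigned field(const std::string& s, std::string::size_type pos, std::string::size_type len)
        {
            std::istringstream iss(s.substr(pos, len));
            unsigned value = 0;
            iss >> value; // type-safe conversion of the slice
            return value;
        }

        int main()
        {
            const std::string s = "20101220110651184";
            unsigned year   = field(s, 0, 4);
            unsigned month  = field(s, 4, 2);
            unsigned day    = field(s, 6, 2);
            unsigned hour   = field(s, 8, 2);
            unsigned minute = field(s, 10, 2);
            unsigned second = field(s, 12, 2);
            unsigned milli  = field(s, 14, 3);
            std::cout << year << ' ' << month << ' ' << day << ' '
                      << hour << ' ' << minute << ' ' << second << ' ' << milli << '\n';
        }

    The field widths stay as magic numbers, exactly as in the sscanf version, and nothing here needs Boost or C++0x.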

  • Am I doing AS3 reference cleanup correctly?

    - by Ólafur Waage
    In one frame of my .fla file (let's call it frame 2), I load a few XML files, then send that data into a class that is initialized in that frame; this class creates a few timers and listeners. Then, when this class is done doing its work, I call a dispatchEvent and move to frame 3. This frame does some things as well; it's initialized and creates a few event listeners and timers. When it's done, I move to frame 2 again. This is supposed to repeat as often as I need, so I need to clean up the references correctly, and I'm wondering if I'm doing it correctly.

    For sprites I do this:

        world.removeChild(Background); // world is the parent stage
        Background = null;

    For instances of other classes I do this:

        Players[i].cleanUp(world); // do any cleanup within the instanced class
        world.removeChild(PlayersSelect[i]);

    For event listeners I do this:

        if(Background != null)
        {
            Background.removeEventListener(MouseEvent.CLICK, deSelectPlayer);
        }

    For timers I do this:

        if(Timeout != null)
        {
            Timeout.stop();
            Timeout.removeEventListener(TimerEvent.TIMER, queueHandler);
            Timeout.removeEventListener(TimerEvent.TIMER_COMPLETE, queueCompleted);
            Timeout = null;
        }

    And for library images I do this:

        if(_libImage != null)
        {
            s.removeChild(Images._libImage); // s is the stage
            _libImage = null;
        }

    And for the class itself in the main timeline, I do this:

        Frame2.removeEventListener("Done", IAmDone);
        Frame2.cleanUp(); // the cleanup() does all the stuff above
        Frame2 = null;

    Even if I do all this, when I get to frame 2 for the 2nd time, it runs for 1-2 seconds and then I get a lot of null reference errors because the cleanup function is called prematurely. Am I doing the cleanup correctly? What can cause events to fire prematurely?

  • jQuery: copying one date field to another when a checkbox is checked by default

    - by OM The Eternity
    I am using the jQuery library to copy the text entered in one text field to another text field when a checkbox is clicked, which is as follows:

        <html>
        <head>
            <script src="js/jquery.js"></script>
        </head>
        <body>
            <form>
                <input type="text" name="startdate" id="startdate" value=""/>
                <input type="text" name="enddate" id="enddate" value=""/>
                <input type="checkbox" name="checker" id="checker" />
            </form>
            <script>
                $(document).ready(function(){
                    $("input#checker").bind("click", function(o){
                        if($("input#checker:checked").length){
                            $("#enddate").val($("#startdate").val());
                        }else{
                            $("#enddate").val("");
                        }
                    });
                });
            </script>
        </body>
        </html>

    Now here I want the checkbox to be selected by default, so that the data entered in the start date gets copied automatically since the checkbox is checked by default. So what event should be called here in the jQuery script? Please help me in resolving this issue.

  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool, i.e. the user pipes in a CSV file and they get a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock. I cannot figure out how to get R to keep running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit by using an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device is not supported (X11). Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this?

    Edit: Thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)

        ## function to display plot, called by tkrplot and embedded in a window
        plotIt <- function(){
            plot(x=1:10, y=1:10)
        }

        ## create top level window
        tt <- tktoplevel()

        ## variable to wait on like a condition variable, to be set by event handler
        done <- tclVar(0)

        ## bind to the window destroy event, set done variable when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)

        ## Have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))

        ## wait until done is true
        tkwait.variable(done)

  • Loading datasets from the datastore and merging them into a single dictionary. Resource problem.

    - by fredrik
    Hi, I have a product database that contains products, parts and labels for each part based on langcodes. The problem I'm having, and haven't got around, is the huge amount of resources used to get the different datasets and merge them into a dict to suit my needs. The products in the database are based on a number of parts that are of a certain type (i.e. color, size), and each part has a label for each language. I created 4 different models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels.

    I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 products, 3 types, 3 parts for each type, and 2 languages, and the request takes a whopping 5500 ms to generate.

        for product in productData:
            productDict = {}
            typeDict = {}
            productDict['productName'] = product.name

            cache_key = 'productparts_%s' % (slugify(product.key()))
            partData = memcache.get(cache_key)

            if not partData:
                for type in typeData:
                    typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

                ## Start of problem lines ##
                for defaultPart in product.defaultPartsData:
                    for label in labelsForLangCode:
                        if label.key() in defaultPart.partLabelList:
                            typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel

                for optionalPart in product.optionalPartsData:
                    for label in labelsForLangCode:
                        if label.key() in optionalPart.partLabelList:
                            typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
                ## end problem lines ##

                memcache.add(cache_key, typeDict, 500)
                partData = memcache.get(cache_key)

            productDict['parts'] = partData
            productList.append(productDict)

    I guess the problem is that there are too many for loops and I have to iterate over the same data over and over again. labelsForLangCode gets all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.key). The same goes for all labels for a part.

    The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts and show a selector for the optional ones. The defaultPartsData and optionalPartsData are properties in the Product model that look like this:

        @property
        def defaultPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

        @property
        def optionalPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)

    When the completed dict is in the memcache it works smoothly, but isn't the memcache reset if the application goes into hibernation? Also, I would like to show the page for a first-time user (memcache empty) without the enormous delay. And as I said above, this is only a small number of parts/products; what will the result be when it's 30 products with 100 parts? Is one solution to create a scheduled task to cache it in the memcache every hour? Is this efficient?

    I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution.

    ..fredrik

  • WPF ObservableCollection in xaml

    - by Cloverness
    Hi, I have created an ObservableCollection in the code-behind of a user control. It is created when the window loads:

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            Entities db = new Entities();
            ObservableCollection<Image> _imageCollection = new ObservableCollection<Image>();
            IEnumerable<ElectricalLibrary> libraryQuery = from c in db.ElectricalLibraries select c;
            foreach (ElectricalLibrary c in libraryQuery)
            {
                Image finalImage = new Image();
                finalImage.Width = 80;
                BitmapImage logo = new BitmapImage();
                logo.BeginInit();
                logo.UriSource = new Uri(c.url);
                logo.EndInit();
                finalImage.Source = logo;
                _imageCollection.Add(finalImage);
            }
        }

    I need to get the ObservableCollection of images which are created based on the URL saved in a database, and I need a ListView or other ItemsControl to bind to it in the XAML file. But I can't figure out how to pass the ObservableCollection to the ItemsSource of that control. I tried to create a class and then create an instance of the class in the XAML file, but it did not work. Should I create a static resource somehow? Any help will be greatly appreciated.

  • Boost program will not work on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost.Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread

    By commenting out code, I have found out that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what may be the problem (and the solution)? Thanks!

    Edit: I tried changing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0
        #include <boost/asio.hpp>

        int main(int argc, char* argv[])
        {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking on compilation; however, this minimal example still gives me a segmentation fault.
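
    A hedged guess, not something confirmed in this listing: the trailing -static -lpthread is a known trouble spot, because glibc declares many pthread symbols as weak and a statically linked Boost.Asio/Boost.Thread program can end up calling the single-threaded stubs, which typically surfaces as a crash when the first io_service is constructed. A commonly suggested experiment is to drop -static, or to force the whole pthread archive to be linked, along the lines of:

        c++ ... -lboost_system -lboost_thread -static -Wl,--whole-archive -lpthread -Wl,--no-whole-archive

    If the minimal example links and runs cleanly without -static, that points the finger at static linking rather than at the posted code.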

  • What is wrong with this attempt at sending a break signal?

    - by Jook
    I have quite a headache about this seemingly easy task: send a break signal to my device, like wxTerm (or any similar terminal application) does. This signal has to be 125 ms long, according to my tests and the device's specification. It should result in a specific response, but what I get is a longer response than expected, and the transmitted data is wrong. E.g.:

        what it should respond:  08 00 81 00 00 01 07 00
        what it does respond:    08 01 0A 0C 10 40 40 07 00 7F

    What really boggles me is that after I have used wxTerm to look at my available COM ports (without connecting or sending anything), my code starts to work! I can send then as many breaks as I like; I get my response right from then on. I have to reset my PC in order to try it again. What the heck is going on here?! Here is my code for a reset through a break signal:

        minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device)
            : active_(true),
              io_service_(io_service),
              serialPort(io_service, device)
        {
            if (!serialPort.is_open())
            {
                cerr << "Failed to open serial port\n";
                return;
            }
            boost::asio::serial_port_base::flow_control FLOW( boost::asio::serial_port_base::flow_control::hardware );
            boost::asio::serial_port_base::baud_rate baud_option(baud);
            serialPort.set_option(FLOW);
            serialPort.set_option(baud_option);
            read_start();

            std::cout << SetCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::ptime mst1 = boost::posix_time::microsec_clock::local_time();
            boost::this_thread::sleep(boost::posix_time::millisec(125));
            boost::posix_time::ptime mst2 = boost::posix_time::microsec_clock::local_time();
            std::cout << ClearCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::time_duration msdiff = mst2 - mst1;
            std::cout << msdiff.total_milliseconds() << std::endl;
        }

    Edit: It was only necessary to look at the combo-box selection of COM ports in wxTerm - no active connection needed to be established in order to make my code work. I am guessing that there is some sort of initialisation missing, which is done when wxTerm is creating the list for the serial-port combo-box.
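
    One thing worth ruling out - purely an assumption here, not something the question confirms - is that the port's character size, parity and stop bits are never set explicitly, so the break only behaves correctly after another program (wxTerm enumerating the ports) has initialised the driver. A minimal sketch of configuring every option through Boost.Asio before issuing the break:

        #include <boost/asio.hpp>

        // Hypothetical helper for this sketch, not part of the original code.
        void configure_port(boost::asio::serial_port& serialPort, unsigned int baud)
        {
            using boost::asio::serial_port_base;
            // Set every option explicitly instead of relying on whatever state
            // the driver (or a previously run terminal program) left behind.
            serialPort.set_option(serial_port_base::baud_rate(baud));
            serialPort.set_option(serial_port_base::character_size(8));
            serialPort.set_option(serial_port_base::parity(serial_port_base::parity::none));
            serialPort.set_option(serial_port_base::stop_bits(serial_port_base::stop_bits::one));
            serialPort.set_option(serial_port_base::flow_control(serial_port_base::flow_control::hardware));
        }

    The 8-N-1 values are only an example; the device's specification should dictate them.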

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition: in my code I need to loop through a series of values (about five in a row), and for each value I need to insert the value into a table and dynamically create a new view using that value as part of the WHERE clause and the name of the view.

    The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly but only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view named MyView_x5 that has the correct WHERE clause.

    At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would create the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB and did the insert and CREATE VIEW statements from my DLL. Same result every time.

    Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created.

    Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My Express edition is 9.00.1399.06. Not sure about the Enterprise edition, but I think it is SP2.

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code.

    I have a database which holds objects such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task because the app can be killed at any point. For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server - this could also be a property that is queryable, but let's assume this is attached to this object
        @end

    Now let's say I want to upload the profile picture to a remote server - I could do something like:

        @implementation ProfilePictureUploader

        -(void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void(^)(BOOL successInUploading))completion {
            NSURLSession *uploadImageSession = ..... // code to setup uploading the image - and calling the completion handler;
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture - and if it was successful, update the UI and the database that this action happened:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ....];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db.
                aNewProfilePicture.successfullyUploaded = YES;
                [MyDatabase update:aNewProfilePicture]; // persist the change
            }
        }];

    Now obviously if my app is running, then this "ProfilePicture" object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot. All callbacks that may exist are maintained and the app state is straightforward.

    But I'm not clear what happens if the app "dies" at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process. Therefore the upload will continue and my app will be awakened at some point in the future to handle completion. But the object "aNewProfilePicture" is non-existent at that point and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the "successfullyUploaded" property for that user)? Do I need to re-work everything touching the DB or UI to correspond with the new API and work in a context-free environment?

  • C++ : integer constant is too large for its type

    - by user38586
    I need to bruteforce a year for an exercise. The compiler keeps throwing this error:

        bruteforceJS12.cpp:8:28: warning: integer constant is too large for its type [enabled by default]

    My code is:

        #include <iostream>
        using namespace std;

        int main(){
            unsigned long long year(0);
            unsigned long long result(318338237039211050000);
            unsigned long long pass(1337);

            while (pass != result) {
                for (unsigned long long i = 1; i <= year; i++) {
                    pass += year * i * year;
                }
                cout << "pass not cracked with year = " << year << endl;
                ++year;
            }
            cout << "pass cracked with year = " << year << endl;
        }

    Note that I already tried with unsigned long long result(318338237039211050000ULL); I'm using gcc version 4.8.1.

    EDIT: Here is the corrected version using the InfInt library http://code.google.com/p/infint/

        #include <iostream>
        #include "InfInt.h"
        using namespace std;

        int main(){
            InfInt year = "113";
            InfInt result = "318338237039211050000";
            InfInt pass = "1337";

            while (pass != result) {
                for (InfInt i = 1; i <= year; i++) {
                    pass += year * i * year;
                }
                cout << "year = " << year << " pass = " << pass << endl;
                ++year;
            }
            cout << "pass cracked with year = " << year << endl;
        }
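
    For completeness only - the InfInt version above already answers the question - the constant 318338237039211050000 is roughly 3.2 x 10^20, beyond the 64-bit maximum of about 1.8 x 10^19, so no built-in integer type can hold it and some arbitrary-precision type is required. If Boost happens to be available, a hedged alternative sketch with Boost.Multiprecision would look like this:

        #include <iostream>
        #include <boost/multiprecision/cpp_int.hpp>

        int main()
        {
            // cpp_int is arbitrary precision; the literal is passed as a string
            // because it cannot be represented by any built-in integer type.
            boost::multiprecision::cpp_int result("318338237039211050000");
            std::cout << result << std::endl;
        }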

  • Should we retire the term "Context"?

    - by MrGumbe
    I'm not sure if there is a more abused term in the world of programming than "Context." A word that has a very clear meaning in the English language has somehow morphed into a hot mess in software development, where the definition and connotation can be completely different based on what library you happen to be developing in.

    Tomcat uses the word context to mean the configuration of a web application. Java applets, on the other hand, use an AppletContext to define attributes of the browser and HTML tag that launched it, but the BeanContext is defined as a container. ASP.NET uses the HttpContext object as a grab bag of state - containing information about the current request/response, session, user, server, and application objects. Context-Oriented Programming defines the term as "Any information which is computationally accessible may form part of the context upon which behavioral variations depend," which I translate as "anything in the world." The innards of the Windows OS use the CONTEXT structure to define properties of the hardware environment. The .NET installation classes, however, use the InstallContext property to represent the command-line arguments entered to the installation class.

    The above doesn't even touch how all of us non-framework developers have used the term. I've seen plenty of developers fall into the subconscious trap of "I can't think of anything else to call this class, so I'll name it 'WidgetContext.'" Do you all agree that before naming our class a "Context," we may want to first consider some more descriptive terms? "Environment", "Configuration", and "ExecutionState" come readily to mind.

  • Convert an image to a monochrome byte array

    - by Scott Chamberlain
    I am writing a library to interface C# with the EPL2 printer language. One feature I would like to try to implement is printing images. The specification doc says:

        p1 = Width of graphic
             Width of graphic in bytes. Eight (8) dots = one (1) byte of data.
        p2 = Length of graphic
             Length of graphic in dots (or print lines).
        Data = Raw binary data without graphic file formatting. Data must be in bytes. Multiply the width in bytes (p1) by the number of print lines (p2) for the total amount of graphic data. The printer automatically calculates the exact size of the data block based upon this formula.

    I plan on my source image being a 1-bit-per-pixel BMP file, already scaled to size. I just don't know how to get it from that format into a byte[] for me to send off to the printer. I tried ImageConverter.ConvertTo(Object, Type); it succeeds, but the array it outputs is not the correct size, and the documentation is very lacking on how the output is formatted. My current test code:

        Bitmap i = (Bitmap)Bitmap.FromFile("test.bmp");
        ImageConverter ic = new ImageConverter();
        byte[] b = (byte[])ic.ConvertTo(i, typeof(byte[]));

    Any help is greatly appreciated, even if it is in a totally different direction.

  • ZEND - Creating custom routes without overwriting the default ones

    - by Pedro Cordeiro
    I'm trying to create something that looks like Facebook's profile URL (http://facebook.com/username). So, at first I tried something like this:

        $router->addRoute(
            'eventName',
            new Zend_Controller_Router_Route(
                '/:eventName',
                array(
                    'module'     => 'default',
                    'controller' => 'event',
                    'action'     => 'detail'
                )
            )
        );

    I kept getting the following error:

        Fatal error: Uncaught exception 'Zend_Controller_Router_Exception' with message 'eventName is not specified' in /var/desenvolvimento/padroes/zf/ZendFramework-1.12.0/library/Zend/Controller/Plugin/Broker.php on line 336

    Not only was I unable to make that piece of code work, but all my default routes were (obviously) overwritten. So I have, for example, stuff like "mydomain.com/admin" that was now returning the same error (as it fell into the same pattern as /:eventName).

    What I need to do is to create this custom route without overwriting the default ones and actually working (dûh). I have already checked the online docs and a lot (A LOT) of stuff on Google, but I didn't find anything related to the error I'm getting or how to avoid overwriting the default routes. I'd appreciate anything that could point me in the right direction. Thanks.

  • Unexpected output using subprocess in Python

    - by Vic
    I am trying to run a shell command from within my Python (version 2.6.5) code, but it is generating different output than the same command run within the shell (bash):

    bash:

        ~> ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'
        192.168.1.10

    Python:

        >>> def get_ip():
        ...     cmd_string = "ifconfig eth0 | sed -rn \'s/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//\'"
        ...     process = subprocess.Popen(cmd_string, shell=True, stdout=subprocess.PIPE)
        ...     out, err = process.communicate()
        ...     return out
        ...
        >>> get_ip()
        '\x01\n'

    My guess is that I need to escape the quotes somehow when running in Python, but I am not sure how to go about this.

    NOTE: I cannot install additional modules or update Python on the machine that this code needs to be run on. It needs to work as-is with Python 2.6.5 and the standard library.

  • Compile for mixed platform (32, 64) and reference a 32 or 64 bit DLL resolved at runtime

    - by Nigel Aston
    Using VS2010 under Windows 32 or 64 bit. Our C# app calls a 3rd-party DLL (managed) that interfaces to an unmanaged DLL. The 3rd-party DLL API appears identical in 32 or 64 bit, although underneath it links to a 32- or 64-bit unmanaged DLL. We want our C# app to run on either a 32- or 64-bit OS; ideally it will auto-detect the OS and load the appropriate 3rd-party DLL - via a simple factory class which tests the Environment. So the neatest solution would be a runtime folder containing:

        OurApp.exe
        3rdParty32.DLL
        3rdPartyUnmanaged32.DLL
        3rdParty64.DLL
        3rdPartyUnmanaged64.DLL

    However, the interface for the managed 3rdParty 32 and 64 DLL is identical, so both cannot be referenced within the same VS2010 project: when adding the second, the warning triangle is shown and it does not get referenced. Is my only answer to create two extra library DLL projects to reference the 3rdParty 32 and 64 DLLs? So I would end up with this project arrangement:

        Project 1: Builds OurApp.exe, dynamically creates an object for Project 2 or Project 3.
        Project 2: Builds OurApp32.DLL which references 3rdParty32.dll
        Project 3: Builds OurApp64.DLL which references 3rdParty64.dll

  • Run code before class instantiation in ActionScript 3

    - by soow.fr
    I need to run code in a class declaration before its instantiation. This would be especially useful to automatically register classes in a factory. See:

        // Main.as
        public class Main extends Sprite {
            public function Main() : void {
                var o : Object = Factory.make(42);
            }
        }

        // Factory.as
        public class Factory {
            private static var _factory : Array = new Array();

            public static function registerClass(id : uint, c : Class) : void {
                _factory[id] = function () : Object { return new c(); };
            }

            public static function make(id : uint) : Object {
                return _factory[id]();
            }
        }

        // Foo.as
        public class Foo {
            // Run this code before instantiating Foo!
            Factory.registerClass(42, Foo);
        }

    AFAIK, the JIT machine for the ActionScript language won't let me do that, since no reference to Foo is made in the Main method. The Foo class being generated, I can't (and don't want to) register the classes in Main: I'd like to register all the exported classes in a specific package (or library). Ideally, this would be done through package introspection, which doesn't exist in ActionScript 3. Do you know any fix (or other solution) to my design issue?

  • How to Create the Upload File for Application Loader?

    - by Ohad Regev
    When I use Application Loader, I get to the point where it asks me to "Choose..." the file to be uploaded. If I understand correctly, it is supposed to be the appName.app file I see under "Products" in my app bundle (I right-click it and select "Show in Finder" to get to the specific file in the library; then I'm supposed to ZIP it, and the ZIP file is what I will choose in Application Loader). First, am I correct with this assumption? If yes:

        What should I define differently in Xcode compared to the way I used to build the application for testing (on the simulator and on my personal iPhone)?
        Should I change the Info - Command-line build setting from Debug to Release?
        How should I define the Build Settings - Code Signing section (in which field should I select the "iPhone Developer" option and in which should it be "iPhone Distribution")?
        Are there any other important Info/Build Settings/p.list/etc... fields I should relate to?

    Any help will be appreciated...

  • Facebook dialog appears but always says "An error has occurred"

    - by Conor James
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        ...
        <head>
        ...
        <script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
        </head>
        <body>
        <div id="fb-root"></div>
        <fb:like id="fb-like" href="http://www.example.com" layout="button_count" show_faces="false" width="100"></fb:like>
        ...

    myscript.js:

        FB.ui(
          {
            method: 'stream.publish',
            message: 'getting educated about Facebook Connect',
            attachment: {
              name: 'Connect',
              caption: 'The Facebook Connect JavaScript SDK',
              description: (
                'A small JavaScript library that allows you to harness ' +
                'the power of Facebook, bringing the user\'s identity, ' +
                'social graph and distribution power to your site.'
              ),
              href: 'http://github.com/facebook/connect-js'
            },
            action_links: [
              { text: 'Code', href: 'http://github.com/facebook/connect-js' }
            ],
            user_message_prompt: 'Share your thoughts about Connect'
          },
          function(response) {
            if (response && response.post_id) {
              alert('Post was published.');
            } else {
              alert('Post was not published.');
            }
          }
        );

    I have a Facebook Like button on my page which works fine. But when I call the FB.ui method above from my JavaScript source, the Facebook dialog pops up but displays this error message:

        An error occurred. Please try again later.

    This has happened repeatedly for two days since I started trying to implement it. Not a very helpful error message. Any idea what might cause it or how to narrow down the problem?

  • Prototyping Qt/C++ in Python

    - by tstenner
    I want to write a C++ application with Qt, but build a prototype first using Python and then gradually replace the Python code with C++. Is this the right approach, and what tools (bindings, binding generators, IDE) should I use? Ideally, everything should be available in the Ubuntu repositories so I wouldn't have to worry about incompatible or old versions and could have everything set up with a simple aptitude install. Is there any comprehensive documentation about this process, or do I have to learn every single component, and if yes, which ones?

    Right now I have multiple choices to make:

        Qt Creator, because of the nice auto-completion and Qt integration.
        Eclipse, as it offers support for both C++ and Python.
        Eric (haven't used it yet)
        Vim
        PySide, as it's working with CMake and Boost.Python, so theoretically it will make replacing Python code easier.
        PyQt, as it's more widely used (more support) and is available as a Debian package.

    Edit: As I will have to deploy the program to various computers, the C++ solution would require 1-5 files (the program and some library files if I'm linking it statically); using Python, I'd have to build PyQt/PySide/SIP/whatever on every platform and explain how to install Python and everything else.

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change.

    Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .Net framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects.

    We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :)

    Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere.

    The script that used File::Find finished at 20,462 files. Out of curiosity I wrote a tiny recursive function to count the items, which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test.

    The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.

    Here is the meat of the script; I don't think including DBI stuff would be relevant since I reran the script with nothing but a counter in process_img(), which returned the same number.

        find(\&process_img, $path_from);

        sub process_img {
            eval {
                return if ($_ eq "." or $_ eq "..");
                ## Omitted querying and composing new paths for brevity.
                make_path("$path_to\\img\\$dir_area\\$dir_address\\$type");
                copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name");
            };
            if ($@) { print STDERR "eval barks: $@\n"; return }
        }

    And here is another method I used to count files:

        count_images($path_from);

        sub count_images {
            my $path = shift;
            opendir my $images, $path or die "died opening $path";
            while (my $item = readdir $images) {
                next if $item eq '.' or $item eq '..';
                $img_counter++ && next if -f "$path/$item";
                count_images("$path/$item") if -d "$path/$item";
            }
            closedir $images or die "died closing $path";
        }

        print $img_counter;
