Search Results

Search found 35173 results on 1407 pages for 'critical path software'.

Page 653/1407

  • WPF - Transparency - Stream Desktop Content

    - by Niels Willems
    Greetings. I'm in the process of making a scoreboard for a game (StarCraft II). The scoreboard is being made as a WPF application with a C# code-behind. I already have a version which works about 90% in WinForms, but WinForms lacked the support for easily making it look a lot nicer that WPF offers. The point of this application is to form a kind of overlay on top of a running game. The game runs in Fullscreen (Windowed) mode, so in WinForms I coded the window to always stay on top, which worked without problems. Since the main look of the WPF app is based on an image with a transparent background, I have set most Background values to Transparent. However, when I do this the entire application does not get picked up by streaming software: the stream just shows my desktop or the game I'm playing, but not my application, even though it IS there. I can see it with my own eyes, but the audience on the stream cannot. Does anyone have any experience with this matter? It's really doing my head in, and my entire application will be useless if it is not visible on streams. If I have to set the background to a solid color rather than transparent, the look of the UI will be completely ruined. I'm basically trying to make a game overlay in C# and WPF. I have read you can do this in other ways as well, but I have little to no knowledge of C++ nor do I know anything about DirectX. Thank you for your time reading and your possible insights. Edit: The best solution would be an overlay similar to the ones of Steam/Xfire/Dolby Axon. Edit 2: I've had no luck with any of the suggestions, so I made the transparent parts of my image non-transparent and let the user decide which version to use depending on what streaming software they are using.


  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines buffer size so that smaller files are copied with a smaller buffer, and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if statement that leads to another if statement, creating four branches that all just call the same function with different parameters. import os import sys def smartcopy(filestocopy, dest_path, show_progress = False): """Determines what buffer size to use with copy() Setting show_progress to True calls back display_progress()""" #filestocopy is a list of dictionaries for the files needed to be copied #dictionaries are used as the fullpath, st_mtime, and size are needed if len(filestocopy.keys()) == 0: return None #Determines average file size for which buffer to use average_size = 0 for key in filestocopy.keys(): average_size += int(filestocopy[key]['size']) average_size = average_size/len(filestocopy.keys()) #Smaller buffer for smaller files if average_size < 1024*10000: #Buffer sizes determined by informal tests on my laptop if show_progress: for key in filestocopy.keys(): #dest_path+key is the destination path, as the key is the relative path #and the dest_path is the top level folder copy(filestocopy[key]['fullpath'], dest_path+key, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, callback = None) #Bigger buffer for bigger files else: if show_progress: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600) def display_progress(pos, total, filename): percent = round(float(pos)/float(total)*100,2) if percent <= 100: sys.stdout.write(filename + ' - ' + str(percent)+'% \r') else: percent = 100 sys.stdout.write(filename + ' - Completed \n') Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
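    A hedged sketch (not from the original question) of how the four branches might collapse into a single loop: the buffer size and the progress callback are decided once, and copy() and display_progress() are assumed to be the same helpers described above.

    ```python
    def smartcopy(filestocopy, dest_path, show_progress=False):
        """Copy files, choosing the buffer size from the average file size."""
        if not filestocopy:
            return None

        sizes = [int(info['size']) for info in filestocopy.values()]
        average_size = sum(sizes) / len(sizes)

        # Same threshold and buffer value as the original function.
        buffer_size = 1024 * 2600 if average_size >= 1024 * 10000 else None

        for key, info in filestocopy.items():
            callback = None
            if show_progress:
                # Bind key as a default argument so each callback keeps its own filename.
                callback = lambda pos, total, name=key: display_progress(pos, total, name)
            if buffer_size is None:
                copy(info['fullpath'], dest_path + key, callback=callback)
            else:
                copy(info['fullpath'], dest_path + key, buffer_size, callback=callback)
    ```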


  • cURL cookie negative cookie expire

    - by Joe Doe
    I am having problems with cookies in cURL. After running into them I turned on the verbose option and figured out that cURL sets a negative expire date on the cookies even though the server sends a positive date. Example: * Added cookie _c_sess=""test"" for domain test.com, path /, expire -1630024962 < Set-Cookie: _c_sess="test"; Domain=test.com; HttpOnly; expires=Mon, 26-Mar-2012 14:52:47 GMT; Max-Age=1332773567; Path=/ As you can see, both expires and Max-Age are positive, but cURL sets the expire value to a negative number. Does anybody have an idea? EDIT: Here is the PHP code I use. $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://site.com/"); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0'); curl_setopt($ch, CURLOPT_COOKIEJAR, $cookiepath); curl_setopt($ch, CURLOPT_COOKIEFILE, $cookiepath); curl_setopt($ch, CURLOPT_HEADER ,1); curl_setopt($ch, CURLOPT_VERBOSE ,1); curl_setopt($ch, CURLOPT_STDERR ,$f); curl_setopt($ch, CURLOPT_RETURNTRANSFER ,1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION ,1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0); $data = curl_exec($ch); Data from the cookie jar: #HttpOnly_.test.com TRUE / FALSE -1630016318 _test_sess "test"


  • It's easy to set a breakpoint when debugging VBA, but how about a "startpoint" or a "skippoint"?

    - by PowerUser
    I'm debugging a subroutine in my VBA code. I want to ignore the first half and just run the second half. So, is there a way to set a 'startpoint'? Also, is there an easy way to ignore a specific line of code other than commenting? If not, I'll just continue commenting out all the code I don't want run. The problem with this, of course, is that I have to remember to uncomment the critical code before I send it on to Production.


  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine on pretty much every machine our software was deployed on before. But now we have 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 min to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception will be thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?


  • Why and when should one call _fpreset( )?

    - by STingRaySC
    The only documentation I can find (on MSDN or otherwise) is that a call to _fpreset() "resets the floating-point package." What is the "floating point package?" Does this also clear the FPU status word? I see documentation that says to call _fpreset() when recovering from a SIGFPE, but doesn't _clearfp() do this as well? Do I need to call both? I am working on an application that unmasks some FP exceptions (using _controlfp()). When I want to reset the FPU to the default state (say, when calling to .NET code), should I just call _clearfp(), _fpreset(), or both. This is performance critical code, so I don't want to call both if I don't have to...


  • Java - How to find the redirected url of a url?

    - by Yatendra Goel
    I am accessing web pages through Java as follows: URLConnection con = url.openConnection(); But in some cases, a url redirects to another url, so I want to know the url to which the previous url redirected. Below are the header fields that I got as a response: null-->[HTTP/1.1 200 OK] Cache-control-->[public,max-age=3600] last-modified-->[Sat, 17 Apr 2010 13:45:35 GMT] Transfer-Encoding-->[chunked] Date-->[Sat, 17 Apr 2010 13:45:35 GMT] Vary-->[Accept-Encoding] Expires-->[Sat, 17 Apr 2010 14:45:35 GMT] Set-Cookie-->[cl_def_hp=copenhagen; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT, cl_def_lang=en; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT] Connection-->[close] Content-Type-->[text/html; charset=iso-8859-1;] Server-->[Apache] So at present, I am constructing the redirected url from the value of the Set-Cookie header field. In the above case, the redirected url is copenhagen.craigslist.org Is there any standard way through which I can determine the url to which a particular url is going to redirect? I know that when a url redirects to another url, the server sends an intermediate response containing a header field that tells the url to which it is going to redirect, but I am not receiving that intermediate response through the url.openConnection(); method.


  • Reasonably faster way to traverse a directory tree in Python?

    - by Sridhar Ratnakumar
    Assuming that the given directory tree is of reasonable size: say an open source project like Twisted or Python, what is the fastest way to traverse and iterate over the absolute path of all files/directories inside that directory? I want to do this from within Python (subprocess is allowed). os.path.walk is slow. So I tried ls -lR and tree -fi. For a project with about 8337 files (including tmp, pyc, test, .svn files): $ time tree -fi > /dev/null real 0m0.170s user 0m0.044s sys 0m0.123s $ time ls -lR > /dev/null real 0m0.292s user 0m0.138s sys 0m0.152s $ time find . > /dev/null real 0m0.074s user 0m0.017s sys 0m0.056s $ tree appears to be faster than ls -lR (though ls -R is faster than tree, but it does not give full paths). find is the fastest. Can anyone think of a faster and/or better approach? On Windows, I may simply ship a 32-bit binary tree.exe or ls.exe if necessary. Update 1: Added find
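    For comparison (a sketch, not from the original question), a pure-Python walk that yields absolute paths using os.walk; whether it approaches the speed of shelling out to find will depend on the platform and the filesystem cache.

    ```python
    import os

    def iter_paths(root):
        """Yield the absolute path of every directory and file under root."""
        root = os.path.abspath(root)
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                yield os.path.join(dirpath, name)

    if __name__ == "__main__":
        import sys
        for path in iter_paths(sys.argv[1] if len(sys.argv) > 1 else "."):
            print(path)
    ```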


  • How to: Inline assembler in C++ (under Visual Studio 2010)

    - by toxic shock
    I'm writing a performance-critical, number-crunching C++ project where 70% of the time is used by the 200 line core module. I'd like to optimize the core using inline assembly, but I'm completely new to this. I do, however, know some x86 assembly languages including the one used by GCC and NASM. All I know: I have to put the assembler instructions in _asm{} where I want them to be. Problem: I have no clue where to start. What is in which register at the moment my inline assembly comes into play?


  • Cannot use Java 7 installation if Java 8 is installed

    - by Sebastien Diot
    I normally still use Java 7 for all my coding projects (it's a company "politics" issue), but I installed Java 8 for one third-party project I am contributing to. Now, it seems I cannot have Java 8 installed on Windows 7 x64 and still use Java 7 by default: C:\>"%JAVA_HOME%\bin\java.exe" -version java version "1.7.0_55" Java(TM) SE Runtime Environment (build 1.7.0_55-b13) Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) C:\>java.exe -version java version "1.8.0_05" Java(TM) SE Runtime Environment (build 1.8.0_05-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode) As you can see, JAVA_HOME is completely ignored. I also have Java in the path, using "%JAVA_HOME%\bin", which resolves correctly to Java 7 when I check the path in a DOS box, but it still makes no difference. I checked in the "Java Control Panel" (not sure if this affects the default command-line Java version). Under the "Java" tab, via the "View..." button, you can see the "registered" Java versions. I can add all the versions under the "User" tab, but under "System" there is only Java 8, and there is no way to change it. Am I missing something, or did Oracle just make it impossible to use Java 7 unless I de-install Java 8? I don't want to have to specify the "source" and "target" everywhere, and I don't even know if it is possible for me to specify it everywhere Java is used. EDIT: What I did is de-install all Java. Then I installed the latest Java 7 (both x86 and x64), and then the latest Java 8 (both x86 and x64). After I did that, I noticed that the x64 JDK was gone. It seems Java 8 killed it. So I re-installed the JDK 7 x64 after the JDK 8 x64. Still, JDK 7 x64 did not seem to "replace" the "java.exe" which is copied into the "Windows" directory itself (I assume THAT is the problem).


  • How can I eliminate latency in quicktime streamed video

    - by JJFeiler
    I'm prototyping a client that displays streaming video from a HaiVision Barracuda through a quicktime client. I've been unable to reduce the buffer size below 3.0 seconds... for this application, we need as low a latency as the network allows, and prefer video dropouts to delay. I'm doing the following: - (void)applicationDidFinishLaunching:(NSNotification *)aNotification { NSString *path = [[NSBundle mainBundle] pathForResource:@"haivision" ofType:@"sdp"]; NSError *error = nil; QTMovie *qtmovie = [QTMovie movieWithFile:path error:&error]; if( error != nil ) { NSLog(@"error: %@", [error localizedDescription]); } Movie movie = [qtmovie quickTimeMovie]; long trackCount = GetMovieTrackCount(movie); Track theTrack = GetMovieTrack(movie,1); Media theMedia = GetTrackMedia(theTrack); MediaHandler theMediaHandler = GetMediaHandler(theMedia); QTSMediaPresentationParams myPres; ComponentResult c = QTSMediaGetIndStreamInfo(theMediaHandler, 1,kQTSMediaPresentationInfo, &myPres); Fixed shortdelay = 1<<15; OSErr theErr = QTSPresSetInfo (myPres.presentationID, kQTSAllStreams, kQTSTargetBufferDurationInfo, &shortdelay ); NSLog(@"OSErr %d", theErr); [movieView setMovie:qtmovie]; [movieView play:self]; } I seem to be getting valid objects/structures all the way down to the QTSPres, though the ComponentResult and OSErr are both returning -50. The streaming video plays fine, but the buffer is still 3.0seconds. Any help/insight appreciated. J


  • Having trouble parsing XML with jQuery

    - by Jack
    Hi guys, I'm trying to parse some XML data using jQuery, and as it stands I have extracted the 'ID' attribute of the required nodes and stored them in an array, and now I want to run a loop for each array member and eventually grab more attributes from the nodes specific to each ID. The problem currently is that once I get to the 'for' loop, it isn't looping, and I think I may have written the xml path data incorrectly. It runs once and I receive the 'alert(arrayIds.length);' only once, and it only loops the correct number of times if I remove the subsequent xml path code. Here is my function: var arrayIds = new Array(); $(document).ready(function(){ $.ajax({ type: "GET", url: "question.xml", dataType: "xml", success: function(xml) { $(xml).find("C").each(function(){ $("#attr2").append($(this).attr('ID') + "<br />"); arrayIds.push($(this).attr('ID')); }); for (i=0; i<arrayIds.length; i++) { alert(arrayIds.length); $(xml).find("C[ID='arrayIds[i]']").(function(){ // pass values alert('test'); }); } } }); }); Any ideas?


  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes a web service for the front-end. I would like to find a clean solution to feature management - there are uncertainties and ugly workarounds in the software about which the devs would say "when it happens or is broken, we will fix it". For example, here are the features that a web publisher can have: sites limit, bandwidth limit, SSL feature + SSL configuration per site. If we downgrade a web publisher who has 10 sites down to 5 sites, we can choose not to suspend the extra 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded it, we suspend his account. For the SSL feature, every SSL configuration is tied to a site; what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? So as you can see, there are many different situations and there are different ways of handling them. I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades. Bad. Or a system designed in a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There may be many other approaches; I am still thinking, but puzzled. I am wondering how you would tackle this issue, and are there any recommended patterns, books, or software that you think I can refer to? Appreciate your help.
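    Purely as an illustration (all names below are hypothetical, not from the question), one way to keep the per-feature downgrade rules in one place is to have the middleware compute a list of impact objects that the front-end can display and confirm, instead of hard-coding a feature matrix in the client:

    ```python
    class DowngradeImpact(object):
        """One consequence of a plan change, with a human-readable description."""
        def __init__(self, feature, description, apply_fn):
            self.feature = feature
            self.description = description
            self.apply = apply_fn

    def check_downgrade(publisher, new_plan):
        """Return the impacts a downgrade would have; nothing is changed yet."""
        impacts = []
        excess = len(publisher.sites) - new_plan.site_limit
        if excess > 0:
            impacts.append(DowngradeImpact(
                "sites",
                "%d site(s) exceed the new limit and would be suspended" % excess,
                lambda: publisher.suspend_excess_sites(excess)))
        if publisher.ssl_configs and not new_plan.ssl_enabled:
            impacts.append(DowngradeImpact(
                "ssl",
                "existing SSL configurations would be kept but marked inactive",
                lambda: publisher.deactivate_ssl_configs()))
        return impacts

    # The front-end lists impact.description entries for confirmation, then calls
    # impact.apply() for each confirmed entry.
    ```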


  • Planning and coping with deadlines in SCRUM

    - by John
    From wikipedia: During each “sprint”, typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product “backlog,” which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software. I was reading this and two questions immediately popped into my head: 1)If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily double what seems reasonable. As a developer, I hate being pushed into committing what I can deliver in the next month based on a set of customer requirements, this goes against everything I know about generating reliable estimates rather than having to roughly estimate and then double it! 2)Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if this feature is only 1/2 done at the end of the sprint? The wiki article goes on to talk about Sprint planning, where things are broken down into much smaller tasks for estimation (<1 day) but this is after the Sprint features are already planned and the release agreed, isn't it? kind of like a salesman promising something without consulting the developers.


  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory: new_project/ antigravity/ antigravity.py test/ test_antigravity.py setup.py etc. For an example, see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but it's not realistic to expect your users to use it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
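    One minimal sketch (assuming the layout above, not taken from the question): let the test module put the project root on sys.path itself, so python test/test_antigravity.py works from a fresh checkout; on Python 2.7+ the same layout can also be run with python -m unittest discover from the project root.

    ```python
    # test/test_antigravity.py -- illustrative sketch only
    import os
    import sys
    import unittest

    # Prepend the project root (one level above test/) so the package resolves;
    # this assumes antigravity/ contains an __init__.py, as in the howto layout.
    sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)))

    import antigravity


    class TestAntigravity(unittest.TestCase):
        def test_package_imports(self):
            self.assertTrue(hasattr(antigravity, '__file__'))


    if __name__ == '__main__':
        unittest.main()
    ```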


  • assignment vs std::swap and merging and keeping duplicates in separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options, contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for doubles and warn the user about this accordingly. To this effect, I have void merge_options( set<string> &old_options, const set<string> &new_options ) { // find duplicates and create merged_options, a stringset containing the merged options // handle duplicated the way I want to // ... old_options = merged_options; } Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than one traversal and doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.


  • Installing Plugins from Cloud p2 repository in Eclipse IDE

    - by user1495036
    I have been reading a lot recently about p2 for a requirement of mine. Most of the p2 documentation online points to p2 for RCP. My requirement is for a plugin repo. I have a plugin that is used within the Eclipse IDE. I don't want to change the repo location, but based on the Eclipse version, when the user goes to Install New Software or Check for Updates it needs to download the respective plugins. My repo currently contains all the plugins for all the versions, but at the moment I need to give a different URL to my users based on the version. For example, I am using Eclipse 3.7 (Indigo). I install the plugin through Install New Software by adding the p2 repo URL. Now the user decides, for some requirement, to move to Eclipse 3.6; I want him to connect to the same p2 repo URL and download the plugins created for Eclipse 3.6. This is definitely possible using p2 Discovery, or I could categorize the downloads using a composite repository, but I don't want to do either of these. I just want to know whether there is any API that I can hook into, so that before processing the URL and finding the updates, I can check the version of Eclipse and redirect it, based on the version, to an internal URL. This is possible in RCP; I want to know if I can do it in the Eclipse p2 UI. All of the p2 UI appears to be internal classes. Any direction would be appreciated. Malai


  • pythonpath issue? "python2.5: can't open file 'dev_appserver.py': [Errno 2] No such file or director

    - by Linc
    I added this line to my .bashrc (Ubuntu 9.10): export PYTHONPATH=/opt/google_appengine/ And then I ran the dev_appserver through python2.5 on Ubuntu like this: $ python2.5 dev_appserver.py guestbook/ python2.5: can't open file 'dev_appserver.py': [Errno 2] No such file or directory As you can see, it can't find dev_appserver.py even though it's in my /opt/google_appengine/ directory. Just to make sure it's not a permissions issue I did this: sudo chmod a+rwx dev_appserver.py To check whether it's been added to the system path for python2.5 I did this: $ python2.5 Python 2.5.5 (r255:77872, Apr 29 2010, 23:59:20) [GCC 4.4.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> for line in sys.path: print line ... /usr/local/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg /opt/google_appengine/demos /opt/google_appengine /usr/local/lib/python25.zip ... The directory shows up in this list so I don't understand why it can't be found when I type: $ python2.5 dev_appserver.py guestbook/ I'm new to Python so I would appreciate any help. Thanks.
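    For what it's worth, a hedged sketch of the distinction: PYTHONPATH only tells the interpreter where import statements may look; it does not tell the shell (or Python) where to find a script given as an argument, so the script has to be addressed by its full path (the directory below is the one exported in .bashrc):

    ```python
    import os
    import subprocess

    appengine_dir = "/opt/google_appengine"               # same directory as in .bashrc
    dev_appserver = os.path.join(appengine_dir, "dev_appserver.py")

    # Equivalent of: python2.5 /opt/google_appengine/dev_appserver.py guestbook/
    subprocess.call(["python2.5", dev_appserver, "guestbook/"])
    ```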


  • WPF CollectionViewSource Grouping

    - by Miles
    I'm using a CollectionViewSource to group my data. In my data, I have Property1 and Property2 that I need to group on. The only stipulation is that I don't want sub-groups of another group. So, when I group by these two properties, I don't want Property2 to become a subgroup of Property1's group. The reason I want this is that I want a header that shows the following information: Header: <TextBlock.Text> <MultiBinding StringFormat="Property1: {0}, Property2: {1}"> <Binding Path="Property1"/> <Binding Path="Property2"/> </MultiBinding> </TextBlock.Text> I've tried this with my CollectionViewSource but was not able to "combine" the group and subgroup together: <CollectionViewSource x:Key="myKey" Source="{Binding myDataSource}"> <CollectionViewSource.GroupDescriptions> <PropertyGroupDescription PropertyName="Property1" /> <PropertyGroupDescription PropertyName="Property2" /> </CollectionViewSource.GroupDescriptions> </CollectionViewSource> Is it possible to group two properties together? Something like below? <CollectionViewSource x:Key="myKey" Source="{Binding myDataSource}"> <CollectionViewSource.GroupDescriptions> <PropertyGroupDescription PropertyName="Property1,Property2" /> </CollectionViewSource.GroupDescriptions> </CollectionViewSource>


  • getting an exception when refreshing the configuration in memory on change to external config file

    - by RKP
    Hi, I have a Windows service which reads its config settings from an external file that is located at a different path than the service's executable. The Windows service uses a FileSystemWatcher to monitor changes to the external config file, and when the config file is changed, it should refresh the settings in memory by reading the updated settings from the config file. But this is where I am getting an exception "ConfigurationErrorsException", and the message is "An error occurred creating the configuration section handler for appSettings: The process cannot access the file 'M:\somefolder\WindowsService1.Config' because it is being used by another process." The inner exception is actually "IOException" with the same message. Here is the code; I am not sure what is wrong with it. Please help. protected void watcher_Changed(object sender, FileSystemEventArgs e) { ConfigurationManager.RefreshSection(ConfigSectionName); WriteToEventLog(ConfigKeyCheck); if (FileChanged != null) FileChanged(this, EventArgs.Empty); } private void WriteToEventLog(string key) { if (EventLog.SourceExists(ServiceEventSource)) { EventLog.WriteEntry(ServiceEventSource, string.Format("key:{0}, value:{1}", key, ConfigurationManager.AppSettings[key])); } }


  • Backup Google Calendar programmatically: https://www.google.com/calendar/exporticalzip

    - by Michael
    I'm struggling with writing a Python script that automatically grabs the zip file containing all my Google calendars and stores it (as a backup) on my hard disk. I'm using ClientLogin to get an authentication token (and can successfully obtain the token). Unfortunately, I'm unable to retrieve the file at https://www.google.com/calendar/exporticalzip; it always asks me for the login credentials again by returning a login page as HTML (instead of the zip). Here's the critical code: post_data = post_data = urllib.urlencode({ 'auth': token, 'continue': zip_url}) request = urllib2.Request('https://www.google.com/calendar', post_data, header) try: f = urllib2.urlopen(request) result = f.read() except: print "Error" Does anyone have any ideas, or has anyone done this before? Or an alternative idea for how to back up all my calendars (automatically)?
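    One avenue to try (an untested sketch, not from the question): ClientLogin tokens are normally presented in an Authorization header of the form "GoogleLogin auth=<token>" rather than as POST data, so a GET against the export URL with that header may avoid the login-page redirect.

    ```python
    import urllib2

    token = "..."  # auth token obtained from the existing ClientLogin step
    zip_url = "https://www.google.com/calendar/exporticalzip"

    request = urllib2.Request(zip_url)
    request.add_header("Authorization", "GoogleLogin auth=%s" % token)

    response = urllib2.urlopen(request)
    with open("calendar_backup.zip", "wb") as backup:
        backup.write(response.read())
    ```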


  • rsync useful w/ encrypted files?

    - by barrycarter
    Is rsync efficient for transferring encrypted files? More specifically: I encrypt 'x' with my public key and call the result 'y'. I rsync 'y' to my backup server. 'x' changes slightly I encrypt the modified 'x' and rsync the modified 'y' to my backup server. Is this efficient? I know a small change in 'x' yields a large change in 'y', but is the change localized? Or has 'y' changed so thoroughly that rsync is not much better than scp? I currently backup my "critical" files by tarring/bzipping them nightly, then encrypting the .tar.bz file and rsync'ing it to my backup server. Many of the individual files don't change, but, of course, the tar file changes if even one of the files change. Is this efficient? Should I be encrypting and backing up each file individually? That way, unchanged files will take no time to rsync.
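    One way to settle the question empirically (a sketch, not from the question) is to re-encrypt after a small change and let rsync report how much data it actually matched; --stats is a standard rsync option, and its "Matched data" / "Literal data" lines show how much of the file the delta algorithm could reuse. The destination below is a placeholder.

    ```python
    import subprocess

    # Push the re-encrypted file and print rsync's own transfer statistics.
    # -z is omitted on purpose: encrypted data does not compress.
    output = subprocess.check_output(
        ["rsync", "-av", "--stats", "y.gpg", "backupserver:/backups/"],
        universal_newlines=True)

    for line in output.splitlines():
        if line.startswith(("Literal data", "Matched data", "Total bytes sent")):
            print(line)
    ```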


  • Could not load SWT library on Windows 32-bit

    - by Firzen
    I am almost done with a Java project that I have been developing on Linux. Now I need to build and test it on Windows. So I have installed Eclipse on Windows XP 32-bit and imported my project. All of the project's dependencies are in jar files in the lib folder, and on Linux everything works well, but on Windows XP I get the following error: Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: no swt-pi-gtk-4234 in java.library.path no swt-pi-gtk in java.library.path Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk-4234.dll Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk.dll at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331) at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240) at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:22) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54) at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133) at gui.Frontend.<init>(Frontend.java:51) at Fighter.main(Fighter.java:18) I have searched for these DLLs but have failed to find them. Where can I download these DLL files? Thanks in advance.


  • PHP file outside doc root needs files outside and inside the document root

    - by jax
    I have a library of classes, all interrelated. Some files are inside the document root and some are outside using the <Directory> and Alias features in httpd.conf Assuming I have 3 files: webroot.php (Inside the document root) alias_directory.php (Inside a folder outside the doc root) alias_directory2.php (Inside a **different** folder outside the doc root) If alias_directory2.php needs both webroot.php and alias_directory.php, This does not work. (Remember alias_directory.php and alias_directory2.php are not in the same locations) require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php'; //(ok) require_once $_SERVER['DOCUMENT_ROOT'].'/alias_directory.php'; //(not ok) This does not work because alias_directory.php is not in the doc root. Similarly require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php'; //(ok) require_once dirname(__FILE__).'/alias_directory.php'; //(not ok) The problem here is that dirname(__FILE__) will return the path for alias_directory2.php not alias_directory.php. This works: require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php'; //(ok) require_once '/full/path/to/directory/alias_directory.php'; //(ok) But is very nasty and is a maintenance nightmare if I decide to move my library to another location. How do I solve this problem, is seems that I need a way to resolve an Alias folder properly.


  • WPF - Dynamically access a specific item of a collection in XAML

    - by Andy T
    Hi, I have a data source ('SampleAppearanceDefinitions'), which holds a single collection ('Definitions'). Each item in the collection has several properties, including Color, which is what I'm interested in here. I want, in XAML, to display the Color of a particular item in the collection as text. I can do this just fine using this code below... Text="{Binding Source={StaticResource SampleAppearanceDefinitions}, Path=Definitions[0].Color}" The only problem is, this requires me to hard-code the index of the item in the Definitions collection (I've used 0 in the example above). What I want to do in fact is to get that value from a property in my current DataContext ('AppearanceID'). One might imagine the correct code to look like this.... Text="{Binding Source={StaticResource SampleAppearanceDefinitions}, Path=Definitions[{Binding AppearanceID}].Color}" ...but of course, this is wrong. Can anyone tell me what the correct way to do this is? Is it possible in XAML only? It feels like it ought to be, but I can't work out or find how to do it. Any help would be greatly appreciated! Thanks! AT

