Search Results

Search found 13862 results on 555 pages for 'questions'.


  • Comparing dicts and updating a list of results

    - by lmnt
    Hello, I have a list of dicts and I want to compare each dict in that list with a dict in a resulting list, add it to the result list if it's not there, and if it's there, update a counter associated with that dict. At first I wanted to use the solution described at http://stackoverflow.com/questions/1692388/python-list-of-dict-if-exists-increment-a-dict-value-if-not-append-a-new-dict but I got an error because one dict cannot be used as a key in another dict. So the data structure I opted for is a list where each entry is a dict and an int:

        r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]]

    The original dataset (that should be compared to the resulting dataset) is a list of dicts:

        d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}
        d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}
        d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        o = [d1, d2, d3, d4]

    The result should be:

        r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2],
             [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1],
             [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]]

    What is the best way to accomplish this? I have a few code examples, but none of them is really good and most do not work correctly. Thanks for any input on this!

    UPDATE: The final code, after Tamás's comments, is:

        from collections import namedtuple, defaultdict

        DataClass = namedtuple("DataClass", "src dst cmd")

        d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2')
        d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1')
        d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        ds = d1, d2, d3, d4

        r = defaultdict(int)
        for d in ds:
            r[d] += 1

        print "list to compare"
        for d in ds:
            print d

        print "result after merge"
        for k, v in r.iteritems():
            print("%s: %s" % (k, v))
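
    (Not from the original post.) For reference, a minimal alternative sketch that produces the same counts with collections.Counter, assuming the same namedtuple records as in the final code above:

        from collections import namedtuple, Counter

        DataClass = namedtuple("DataClass", "src dst cmd")
        ds = [
            DataClass('192.168.0.1', '192.168.0.2', 'cmd1'),
            DataClass('192.168.0.1', '192.168.0.2', 'cmd2'),
            DataClass('192.168.0.2', '192.168.0.1', 'cmd1'),
            DataClass('192.168.0.1', '192.168.0.2', 'cmd1'),
        ]

        # Counter hashes each namedtuple and tallies the duplicates in one pass.
        counts = Counter(ds)
        for record, n in counts.items():
            print("%s: %s" % (record, n))

    Since namedtuples are hashable, this sidesteps the original "dict cannot be used as a key" error in exactly the same way the defaultdict version does.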


  • c# delegate and abstract class

    - by BeraCim
    Hi all: I currently have 2 concrete methods in 2 abstract classes. One class contains the current method, while the other contains the legacy method. E.g.:

        // Class #1
        public abstract class ClassCurrent<T> : BaseClass<T> where T : BaseNode, new()
        {
            public List<T> GetAllRootNodes(int i)
            {
                //some code
            }
        }

        // Class #2
        public abstract class MyClassLegacy<T> : BaseClass<T> where T : BaseNode, new()
        {
            public List<T> GetAllLeafNodes(int j)
            {
                //some code
            }
        }

    I want the corresponding method to run in their relative scenarios in the app. I'm planning to write a delegate to handle this. The idea is that I can just call the delegate and write logic in it to handle which method to call, depending on which class/project it is called from (at least that's what I think delegates are for and how they are used). However, I have some questions on that topic (after some googling):

    1) Is it possible to have a delegate that knows the 2 (or more) methods that reside in different classes?

    2) Is it possible to make a delegate that spawns off abstract classes (like from the above code)? (My guess is no, since delegates create concrete implementations of the passed-in classes.)

    3) I tried to write a delegate for the above code, but I'm being technically challenged:

        public delegate List GetAllNodesDelegate(int k);
        GetAllNodesDelegate del = new GetAllNodesDelegate(ClassCurrent.GetAllRootNodes);

    I got the following error:

        An object reference is required for the non-static field, method, property ClassCurrent<BaseNode>.GetAllRootNodes(int)

    I might have misunderstood something... but if I have to manually declare a delegate at the calling class, AND pass in the function manually as above, then I'm starting to question whether a delegate is a good way to handle my problem. Thanks.


  • SOAP Messages on iPhone

    - by CocoaNewBee
    Hello everyone! I have to use several SOAP messages to get data from a web service. I got some examples of how to do that, but they all have the XML inline (http://icodeblog.com/2008/11/03/iphone-programming-tutorial-intro-to-soap-web-services/):

        // ---- LOGIN -----
        NSString *soapMessage = [NSString stringWithFormat:
            @"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n"
            "<soap12:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap12=\"http://www.w3.org/2003/05/soap-envelope\">\n"
            "<soap12:Body>\n"
            "<Login xmlns=\"http://tempuri.org/\">\n"
            "<sUserID>USER</sUserID>\n"
            "<sUserPw>PASSWORD</sUserPw>\n"
            "<sDomain>SERVER</sDomain>\n"
            "</Login>\n"
            "</soap12:Body>\n"
            "</soap12:Envelope>\n"
        ];

        NSString *urlToSend = [[NSString alloc] initWithString:@"http://SERVER/DIRECTORY/WSDL_FILE.ASMX"];
        NSString *callTOMake = [[NSString alloc] initWithString:@"http://WEBSERVER/Login"];

    Two questions:

    1) Does it make sense to read the SOAP message from a class or a file in Xcode, or should I just define it in the code?

    2) I used SoapUI and .NET to query the service and it works fine, but when I do it from the iPhone simulator it returns the following:

        2010-03-10 15:13:54.773 Hello_SOAP[91204:207] soap:ClientServer did not recognize the value of HTTP Header SOAPAction: http://WEBSERVER/DIRECTORY/Login

    How can I figure out what the issue is that's causing this error on the simulator?


  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate, embedded within the fetch request embedded in my fetched results controller. How does an object within an object get released without an explicit command to do so?

    Long version:

    Application structure: Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller. When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games fetched results controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            //build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context];
            [request setEntity:entity];

            //predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];

            //sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];

            //fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];

            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a zombie, which will eventually crash the application when the table attempts to update itself.

    I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?


  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for an evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file which would end up looking like this in spreadsheet form (based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           | 2         | 1         |
        |----------|-----------|-----------|-----------|
        | Student2 | 1         | 2         | 3         |
        |----------|-----------|-----------|-----------|
        | Student3 | 1         | 3         | 2         |
        |----------|-----------|-----------|-----------|

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields represent the preference weights the students have placed upon the projects (ranging from 1 to 10). If blank, that student does not want that project and there's no chance of him/her being assigned such.

    Anyway, my data is stored within dictionaries. Specifically the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is in the form of a dictionary such that

        preferences[rank] = {project_id}

    and

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create a .csv file. Any help, pointers or good tutorials will be highly appreciated.
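
    (Not part of the original question.) A minimal Python 3 sketch of one way to write such a grid with the standard csv module, assuming students and projects dictionaries shaped as described above; the function name and attribute names are illustrative assumptions:

        import csv

        def write_allocation_grid(students, projects, path="allocation.csv"):
            project_ids = sorted(projects.keys())
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                # Header row: blank corner cell, then one column per project.
                writer.writerow([""] + [projects[pid].project_name for pid in project_ids])
                for sid in sorted(students.keys()):
                    student = students[sid]
                    # Invert preferences: rank -> {project_id} becomes project_id -> rank.
                    rank_of = {}
                    for rank, proj_ids in student.preferences.items():
                        for pid in proj_ids:
                            rank_of[pid] = rank
                    # A blank cell means the student did not rank that project.
                    writer.writerow([student.student_name] +
                                    [rank_of.get(pid, "") for pid in project_ids])

    The same module's csv.reader can read the grid back if a solver later needs it as input.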


  • WordPress: How to override all default theme CSS so your custom one is loaded last?

    - by mickael
    I have a problem. I've been able to include a custom CSS file in the head section of my WordPress theme with the following code:

        function load_my_style_wp_enqueue_scripts() {
            wp_register_style('my_styles_css', includes_url("/css/my_styles.css"));
            wp_enqueue_style('my_styles_css');
        }
        add_action('wp_enqueue_scripts', 'load_my_style_wp_enqueue_scripts');

    But the order in the source code is as follows:

        <link rel='stylesheet' id='my_styles_css-css' href='http://...folderA.../my_styles.css?ver=3.1' type='text/css' media='all' />
        <link rel="stylesheet" id="default-css" href="http://...folderB.../default.css" type="text/css" media="screen,projection" />
        <link rel="stylesheet" id="user-css" href="http://...folderC.../user.css" type="text/css" media="screen,projection" />

    I want my_styles_css to be the last file to load, overriding the default and user files. I've tried to achieve this by modifying wp_enqueue_style in different ways, but without any success. I've tried:

        wp_enqueue_style('my_styles_css', array('default','user'));

    or

        wp_enqueue_style('my_styles_css', false, array('default','user'), '1.0', 'all');

    I've seen some related questions without an answer, or answered with these last two methods, which are still failing for me. The function above is part of a plugin that I've got enabled in my WordPress installation.


  • My Android XML files can't find ActionBarSherlock themes

    - by MalcolmMcC
    I'm trying to develop an app with ActionBarSherlock and everything works except the theming. Specifically, I can import com.actionbarsherlock.app.*, extend SherlockActivity, but I always have this error in my manifest:

        Error: No resource found that matches the given name (at 'theme' with value '@style/Theme.Sherlock').

    I know there have been plenty of questions asked about this, but they have not worked for me. I have tried:

    - refreshing the workspace
    - cleaning all of my projects
    - putting the line in both the <activity> and the <application>
    - setting my targetSdkVersion and minSdkVersion to various values, in both my manifest and ABS's

    and I've tried the following variations, and probably others:

        android:theme="@style/Theme.Sherlock"
        android:theme="@android:style/Theme.Sherlock"
        theme="@style/Theme.Sherlock"
        theme="@android:style/Theme.Sherlock"
        theme="@theme/Theme.Sherlock"
        theme="@android:theme/Theme.Sherlock"

    It's worth noting that the autocomplete after I typed "@style/" was showing nothing, so I tried making my own style in styles.xml and then that showed up, but still nothing from ActionBarSherlock. Also, in styles.xml, I tried to make my own theme to extend Theme.Sherlock, and @style/Theme.Sherlock was not found there either when I tried to add it as a parent. I tried loading the samples but got a JAR mismatch. My conclusion is that somehow my XML files are unable to access the ABS library, but I'm at a loss as to how to fix it. Any help hugely appreciated.


  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (rds), no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own.

    The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message.

    Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (e.g. AfterCompile)?

    The core steps that I'd like to achieve are:

    - Bundle the RDL and datasource files
    - Connect to the host server to register/deploy the reports
    - Re-apply any subscriptions that previously existed
    - Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.


  • Creating ODT and PDF files as end result

    - by Bill Zimmerman
    Hello, I've been working on an app to create various document formats for a while now, and I've had limited success. Ideally, I'd like to dynamically create a fairly simple ODT/PDF/DOC file. I've been focusing my efforts on ODT, because it is editable, and open enough that there are several tools which will convert it to any of the other formats I need.

    The problem is that the ODT XML files are NOT simple, and there aren't any good-quality APIs I could find (especially in Python). So far, I've had the most success creating a template ODT file and then manipulating the DOM in Python as needed. This is OK generally, but is quickly becoming inadequate and requires too much tweaking every single time I need to alter one of the templates.

    The requirements are:

    1) Produce a simple document that will have lists, paragraphs, and the ability to draw simple graphics on the page (boxes, circles, etc...)

    2) The ability to specify page size; the different formats should generally print the exact same output when sent to a printer.

    My questions:

    1) Are there any other ways I can produce ODT/PDF/DOC files?

    2) Would LaTeX be acceptable? I've never really used it; does anyone have experience converting LaTeX files into other formats?

    3) Would it be possible to use HTML? There are a lot of converters online. Technically you can specify dimensions in mm/cm, etc..., but I am worried that the printed output will differ between browsers/converters....

    Any other ideas?
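
    (Not part of the original question.) Since the post mentions having had the most success manipulating a template ODT, here is a minimal sketch of that approach using only the standard library, under the assumption that the template contains literal placeholders such as {{TITLE}} in its text (an .odt file is a ZIP archive whose body lives in content.xml):

        import zipfile
        from xml.sax.saxutils import escape

        def fill_odt_template(template_path, output_path, replacements):
            # Read the document body out of the template archive.
            with zipfile.ZipFile(template_path) as src:
                content = src.read("content.xml").decode("utf-8")
                for placeholder, value in replacements.items():
                    content = content.replace(placeholder, escape(value))
                # Copy every member unchanged except the patched content.xml;
                # reusing each ZipInfo keeps the original order and compression.
                with zipfile.ZipFile(output_path, "w") as dst:
                    for item in src.infolist():
                        data = src.read(item.filename)
                        if item.filename == "content.xml":
                            data = content.encode("utf-8")
                        dst.writestr(item, data)

        # Example with hypothetical file and placeholder names:
        # fill_odt_template("template.odt", "report.odt", {"{{TITLE}}": "Quarterly report"})

    This does not remove the tweaking problem the post describes, but it keeps the template editable in an office suite while the script only touches the placeholders.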


  • Why doesn't Default route work using Html.ActionLink in this case?

    - by StuperUser
    I have a rather peculiar issue with routing. Coming back to routing after not having to worry about its configuration for a year, I am using the default route and an ignore route for resources:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    I have a RulesController with an action for Index and Lorem, and an Index.aspx and Lorem.aspx in the Views/Rules directory. I have an ActionLink aimed at Rules/Index on the master page:

        <li><div><%: Html.ActionLink("linkText", "Index", "Rules")%></div></li>

    The link is being rendered as http://localhost:12345/Rules/ and I am getting a 404. When I type Index into the URL, the application routes it to the action.

    When I change the default route action from "Index" to "Lorem", the ActionLink is rendered as http://localhost:12345/Rules/Index, adding the Index as it's no longer on the default route, and the application routes to the Index action correctly.

    I have used Phil Haack's Routing Debugger, but entering the URL http://localhost:12345/Rules/ is causing a 404 using that too. I think I've covered all of the rookie mistakes, relevant SO questions and basic RTFMs. I'm assuming that "Rules" isn't any sort of reserved word in routing. Other than updating the routes and debugging them, what can I look at?


  • Explaining the forecasts from an ARIMA model

    - by Samik R.
    I am trying to explain to myself the forecasting result from applying an ARIMA model to a time-series dataset. The data is from the M1-Competition; the series is MNB65. For quick reference, I have a Google Docs spreadsheet with the data. I am trying to fit the data to an ARIMA(1,0,0) model and get the forecasts. I am using R. Here are some output snippets:

        > arima(x, order = c(1,0,0))
        Series: x
        ARIMA(1,0,0) with non-zero mean

        Call: arima(x = x, order = c(1, 0, 0))

        Coefficients:
                 ar1  intercept
              0.9421  12260.298
        s.e.  0.0474    202.717

        > predict(arima(x, order = c(1,0,0)), n.ahead=12)
        $pred
        Time Series:
        Start = 53
        End = 64
        Frequency = 1
        [1] 11757.39 11786.50 11813.92 11839.75 11864.09 11887.02
            11908.62 11928.97 11948.15 11966.21 11983.23 11999.27

    I have a few questions:

    (1) How do I explain that although the dataset shows a clear downward trend, the forecast from this model trends upward? This also happens for ARIMA(2,0,0), which is the best ARIMA fit for the data using auto.arima (forecast package), and for an ARIMA(1,0,1) model.

    (2) The intercept value for the ARIMA(1,0,0) model is 12260.298. Shouldn't the intercept satisfy the equation C = mean * (1 - sum(AR coeffs)), in which case the value should be 715.52? I must be missing something basic here.

    (3) This is clearly a series with non-stationary mean. Why is an AR(2) model still selected as the best model by auto.arima? Could there be an intuitive explanation?

    Thanks.
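
    A side note, not from the original post: R's arima() reports the estimated series mean under the label "intercept", so the algebra behind question (2) can be sketched as follows (numbers rounded):

        \[
        (X_t - \mu) = \phi\,(X_{t-1} - \mu) + \varepsilon_t
        \quad\Longleftrightarrow\quad
        X_t = c + \phi X_{t-1} + \varepsilon_t,
        \qquad c = \mu\,(1 - \phi)
        \]
        \[
        c \approx 12260.298 \times (1 - 0.9421) \approx 710
        \]

    Under that reading, 12260.298 is the mean towards which the AR(1) forecasts decay, and c (roughly 710 here) would be the intercept of the regression form rather than the value printed in the output.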


  • Cross compiling from MinGW on Fedora 12 to Windows - console window?

    - by elcuco
    After reading this article, http://lukast.mediablog.sk/log/?p=155, I decided to use MinGW on Linux to compile Windows applications. This means I can compile, test, debug and release directly from Linux. I hacked this build script which will cross-compile the application and even package it in a ZIP file. Note that I am using out-of-source builds for QMake (did you even know this is supported? wow...). Also note that the script will pull the needed DLLs automagically. Here is the script for you all internets to use and abuse:

        #! /bin/sh

        set -x
        set -e

        VERSION=0.1
        PRO_FILE=blabla.pro
        BUILD_DIR=mingw_build
        DIST_DIR=blabla-$VERSION-win32

        # clean up old shite
        rm -fr $BUILD_DIR
        mkdir $BUILD_DIR
        cd $BUILD_DIR

        # start building
        QMAKESPEC=fedora-win32-cross qmake-qt4 QT_LIBINFIX=4 config=\"release\ quiet\" ../$PRO_FILE
        #qmake-qt4 -spec fedora-win32-cross
        make

        DLLS=`i686-pc-mingw32-objdump -p release/*.exe | grep dll | awk '{print $3}'`
        for i in $DLLS mingwm10.dll ; do
            f=/usr/i686-pc-mingw32/sys-root/mingw/bin/$i
            if [ ! -f $f ]; then continue; fi
            cp -av $f release
        done

        mkdir -p $DIST_DIR
        mv release/*.exe $DIST_DIR
        mv release/*.dll $DIST_DIR
        zip -r ../$DIST_DIR.zip $DIST_DIR

    The compiled binary works on the Windows 7 machine I tested. Now to the questions:

    1) When I execute the application on Windows, the theme is not the Windows 7 theme. I assume I am missing a style module; I am not really sure yet.

    2) The application gets a console window for some reason.

    The second point (the console window) is critical. How can I remove this background window? Please note that the extra config lines are not working for me; what am I missing there?


  • Persistent HTTP client connections in Java

    - by Akusete
    I am trying to write a simple HTTP client application in Java and am a bit confused by the seemingly different ways to establish HTTP client connections and efficiently re-use the objects. Currently I am using the following steps (I have left out exception handling for simplicity):

        Iterator<URI> uriIterator = someURIs();
        HttpClient client = new DefaultHttpClient();

        while (uriIterator.hasNext()) {
            URI uri = uriIterator.next();
            HttpGet request = new HttpGet(uri);
            HttpResponse response = client.execute(request);
            HttpEntity entity = response.getEntity();
            InputStream s = entity.getContent();
            processStream();
            s.close();
        }

    In regard to the code above, my question is: assuming all URIs are pointing to the same host (but different resources on that host), what is the recommended way to use a single HTTP connection for all requests? And how do you close the connection after the last request?

    Edit: Also, what is the difference between using uri.openConnection() versus HttpClient? Which is preferable, and what other methods exist?


  • What are simple instructions for creating a Python package structure and egg?

    - by froadie
    I just completed my first (minor) Python project, and my boss wants me to package it nicely so that it can be distributed and called from other programs easily. He suggested I look into eggs. I've been googling and reading, but I'm just getting confused. Most of the sites I'm looking at explain how to use Python eggs that were already created, or how to create an egg from a setup.py file (which I don't yet have).

    All I have now is an Eclipse PyDev project with about 4 modules and a settings/configuration file. In easy steps, how do I go about structuring it into folders/packages and compiling it into an egg? And once it's an egg, what do I have to know about deploying/building/using it? I'm really starting from scratch here, so don't assume I know anything; simple step-by-step instructions would be really helpful...

    These are some of the sites that I've been looking at so far:

    - http://peak.telecommunity.com/DevCenter/PythonEggs
    - http://www.packtpub.com/article/writing-a-package-in-python
    - http://www.ibm.com/developerworks/library/l-cppeak3.html#N10232

    I've also browsed a few SO questions but haven't really found what I need. Thanks!
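
    (Not from the original question.) A minimal sketch of the usual layout and setup.py, with hypothetical project and module names standing in for the real ones:

        myproject/
            setup.py
            myproject/
                __init__.py
                module_a.py
                module_b.py
                settings.cfg

        # setup.py -- a minimal setuptools sketch (all names are placeholders)
        from setuptools import setup, find_packages

        setup(
            name="myproject",
            version="0.1",
            packages=find_packages(),
            # ship the configuration file inside the package
            package_data={"myproject": ["settings.cfg"]},
        )

    With that in place, python setup.py bdist_egg builds an .egg under dist/, while python setup.py sdist and python setup.py install cover source distribution and local installation.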


  • Catch a thread's exception in the caller thread in Python

    - by Mikee
    Hi everyone, I'm very new to Python and multithreaded programming in general. Basically, I have a script that will copy files to another location. I would like this to be placed in another thread so I can output "...." to indicate that the script is still running. The problem that I am having is that if the files cannot be copied it will throw an exception. This is OK if running in the main thread; however, the following code does not work:

        try:
            threadClass = TheThread(param1, param2, etc.)
            threadClass.start()
            ##### **Exception takes place here**
        except:
            print "Caught an exception"

    In the thread class itself, I tried to re-throw the exception, but it does not work. I have seen people on here ask similar questions, but they all seem to be doing something more specific than what I am trying to do (and I don't quite understand the solutions offered). I have seen people mention the usage of sys.exc_info(), however I do not know where or how to use it. All help is greatly appreciated!

    EDIT: The code for the thread class is below:

        import shutil
        import threading

        class TheThread(threading.Thread):
            def __init__(self, sourceFolder, destFolder):
                threading.Thread.__init__(self)
                self.sourceFolder = sourceFolder
                self.destFolder = destFolder

            def run(self):
                try:
                    shutil.copytree(self.sourceFolder, self.destFolder)
                except:
                    raise
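
    (Not part of the original question.) One common pattern, sketched here on top of the class above: remember sys.exc_info() inside run() and let the calling thread check it after join(), since an exception raised in run() never propagates to the thread that called start():

        import shutil
        import sys
        import threading

        class TheThread(threading.Thread):
            def __init__(self, sourceFolder, destFolder):
                threading.Thread.__init__(self)
                self.sourceFolder = sourceFolder
                self.destFolder = destFolder
                self.exc_info = None    # filled in if run() blows up

            def run(self):
                try:
                    shutil.copytree(self.sourceFolder, self.destFolder)
                except Exception:
                    # Stash the details instead of letting them die with this thread.
                    self.exc_info = sys.exc_info()

        # Caller (sketch):
        # worker = TheThread(sourceFolder, destFolder)
        # worker.start()
        # while worker.isAlive():
        #     print "....",            # progress indicator while the copy runs
        #     time.sleep(1)
        # worker.join()
        # if worker.exc_info is not None:
        #     raise worker.exc_info[0], worker.exc_info[1], worker.exc_info[2]  # Python 2 re-raise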


  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users, I want to prepare the application to scale. I have answered many questions by myself, but some are left. Let me explain what I want to do.

    First: at the beginning, the application will have only one server (for a short time) with DNS, PHP, MySQL, data, and memcache.

    Second: then I will split them in two: (1) DNS, MySQL, memcache; (2) data, PHP.

    Third: here is the problem, I don't know how to do it exactly here to keep the application running well. I could do:

    - Front: load balancer, memcache, DNS
    - Web 1: PHP, data
    - Web 2: PHP, data
    - MySQL

    This would be the scheme; all PHP sessions are kept in the DB. BUT, how do I sync the data? Do I run an rsync to keep them up to date? Do I put them on a separate disk (network disk) to be sure? But in this case, how can I handle user uploads? And if the website gets more success and we have to go to bigger structures, wouldn't it create some latency on updates? Or would it be a good thing to go directly to Amazon's web services?

    Some info: I use CodeIgniter as the framework, and Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.


  • How to create platform independent 3D video on 3D TV via HDMI 1.4?

    - by artif
    I am writing a real-time, interactive 3D visualization program, and at each point in the program I can compute 2 images (bitmaps) that are meant to look 3D together by means of stereoscopy. How do I get my program to display the image pairs such that they look 3D on a 3D TV? Is there a platform-independent way of accomplishing it? (By platform I mean independent of GPU brand, operating system, 3D TV vendor, etc.) If not, which is preferable to lock in by: GPU, OS, or 3D TV?

    I suppose I need to be using an HDMI 1.4 cable with the 3D TV? HDMI 1.4 can encode stereoscopy via the side-by-side method. But how do I send such an encoded signal to the monitor? What kind of libraries do I use for this sort of thing? Windows DirectShow? If DirectShow is correct, is there a cross-platform equivalent available?

    If anyone asks, yes I have seen this question: http://stackoverflow.com/questions/2811350/generating-3d-tv-stereoscopic-output-programmatically. However, correct me if I am wrong, it does not appear to be what I'm looking for. I do not have an OpenGL or Direct3D program that generates polygons, for which an Nvidia card can do ad-hoc impromptu stereoscopy simply by rendering the scene from 2 slightly offset points of view and then displaying those 2 images on the monitor; my program already has those image pairs and needs to display them (and they are not the result of rendering polygons).

    Btw, I have never done any major multimedia programming before and know very little about HDMI, DirectShow, 3D TVs, etc., so pardon me if any parts of this question did not make any sense at all.


  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, I have a host that allows Mercurial repositories, and I am also trying to push to Bitbucket). The repository has a lot of files and is a little more than 200 MB. Locally, I am able to clone the repository without problems.

    Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN. I have tried everything. So instead, I created a new repository at the remote servers (both at the host and Bitbucket) with nothing in it. Now I am pushing the complete repository that I have locally to these remote locations. So far it has been unsuccessful, as the push operation is stuck on "searching for changes" and does not give me any other useful output. I have let it go for about an hour with no change.

    Now my first question is: what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDNS, which I have configured properly, and the router forwards the ports correctly), so that I can get the server to clone the repository the first time, after which I will be pushing to it.

    My second question is: assuming the clone at the server does not work (for example, if I was to push my current repository to Bitbucket), is creating an empty repository at the server and then pushing a local repository to the new remote repository OK? Is that the source of the "searching for changes" problem?

    Any help in this regard would be greatly appreciated.


  • Releasing Excel after using Interop

    - by figus
    Hi everyone, I've read many posts looking for my answer, but all are similar to this: http://stackoverflow.com/questions/1610743/reading-excel-files-in-vb-net-leaves-excel-process-hanging

    My problem is that I don't quit the app. The idea is this: if a user has Excel open, and he has the file I'm interested in open, get that Excel instance and do whatever I want to do. But I don't want to close his file after I'm done; I want him to keep working on it. The problem is that when he closes Excel, the process keeps running... and running... and running after the user closes Excel with the X button.

    This is how I try to do it. This piece is used to know if he has Excel open, and in the For loop I check for the file name I'm interested in:

        Try
            oApp = GetObject(, "Excel.Application")
            libroAbierto = True
            For Each libro As Microsoft.Office.Interop.Excel.Workbook In oApp.Workbooks
                If libro.Name = EquipoASeccionIdSeccion.Text & ".xlsm" Then
                    Exit Try
                End If
            Next
            libroAbierto = False
        Catch ex As Exception
            oApp = New Microsoft.Office.Interop.Excel.Application
        End Try

    Here would go the rest of my code: if he doesn't have Excel open, I create a new instance, open the file and do everything else. My code ends with this:

        If Not libroAbierto Then
            libroSeccion.Close(SaveChanges:=True)
            oApp.Quit()
        Else
            oApp.UserControl = True
            libroSeccion.Save()
        End If

        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(libroOriginal)
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(libroSeccion)
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(origen)
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(copiada)
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(oApp)

        libroOriginal = Nothing
        libroSeccion = Nothing
        oApp = Nothing
        origen = Nothing
        copiada = Nothing
        nuevosGuardados = True

    So you can see that, if I opened the file, I call oApp.Quit() and everything else, and the Excel process ends after a few seconds (maybe 5, approx.). BUT if I leave the file open for the user (not calling Quit()), the Excel process keeps running after the user closes Excel with the X button.

    Is there any way to do what I am trying to do? That is, take control of an open instance of Excel and release everything so that when the user closes it with the X button, the Excel process dies normally? Thanks!!!


  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side JavaScript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this:

    1) Ping an external API to get back a JSON object
    2) Work client-side with that object, and eventually have a "download me" link
    3) This link sends the data to my server, which processes it and sends it back with a mime type of application/json, which (should) prompt the user to download the file locally.

    Right now here are my pieces.

    Server-side code:

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant client-side code:

        $("#download").click(function(){
            var json = JSON.stringify(collection); //serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", //this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log( "Data Received: " + data[3] );
                }
            });
            return false;
        });

    Right now when I visit the api.php site with Firefox, it prompts a download of backup.json and that results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, it logs in Firebug:

        Data Received: testing the encoding

    which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things.

    The actual questions:

    1) What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler)?
    2) How do I access, server-side, the JSON object being sent to the server, to serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything).


  • Return value from Object match

    - by Hito_kun
    I'm by no means JS fluent, so forgive me if I'm asking for some really basic stuff, but I've not been able to find a proper answer to my question. I'm writing my first Node.js (plus Express framework and Socket.io) app, and I'm having some fun setting up the server side of a FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure it is the best way to do it, or whether I should go with JavaScript objects):

        [
            {
                "site": 45,
                "users": [
                    { "idUser": 5, "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" },
                    { "idUser": 6, "idSocket": "v8d9d0fgfs7d", "name": "John Connor" }
                ]
            },
            {
                "site": 48,
                "users": [
                    { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" },
                    { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" }
                ]
            }
        ]

    What I want to do is to search the array for value x given value y. In this case, retrieving the idSocket knowing the idUser, WITHOUT having to run through all the array values.

    So I basically have 2 questions: first, what would be the proper way to store online users? And secondly, how do I find values matching the values I already know (find the idSocket that has a given idUser)?

    I would like a pure JS approach (or using some of the tools given by Node, Socket.io or Express), but if that's not possible then I can look for some jQuery.
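
    (Not from the original question, and not Node-specific.) The lookup the post describes becomes a constant-time operation once the users are indexed by idUser instead of being kept in an array; the shape is the same in any language, illustrated here as a Python sketch:

        # Per-site mapping keyed by idUser, so no scan over a users array is needed.
        online = {
            45: {
                5: {"idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris"},
                6: {"idSocket": "v8d9d0fgfs7d", "name": "John Connor"},
            },
            48: {
                22: {"idSocket": "qwe87r7w8qwe", "name": "David Bowie"},
                23: {"idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama"},
            },
        }

        def socket_for(site, id_user):
            # Two direct dictionary lookups instead of iterating over lists.
            user = online.get(site, {}).get(id_user)
            return user["idSocket"] if user else None

        print(socket_for(45, 6))   # v8d9d0fgfs7d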


  • When downloading a file using FileStream, why does the page error message refer to the aspx page name, not

    - by StuperUser
    After building a file path (path, below) in a string (I am aware of Path in System.IO, but am using someone else's code and do not have the opportunity to refactor it to use Path), I am using a FileStream to deliver the file to the user (see below):

        FileStream myStream = new FileStream(path, FileMode.Open, FileAccess.Read);
        long fileSize = myStream.Length;
        byte[] Buffer = new byte[(int)fileSize + 1];
        myStream.Read(Buffer, 0, (int)myStream.Length);
        myStream.Close();

        Response.ContentType = "application/csv";
        Response.AddHeader("content-disposition", "attachment; filename=" + filename);
        Response.BinaryWrite(Buffer);
        Response.Flush();
        Response.End();

    I have seen from http://stackoverflow.com/questions/736301/asp-net-how-to-stream-file-to-user reasons to avoid the use of Response.End() and Response.Close(). I have also seen several articles about different ways to transmit files, and have diagnosed and found a solution to the problem (https and http headers) with a colleague. However, the error message that was being displayed was not about access to the file at path, but about the aspx file.

    Edit: The error message is:

        Internet Explorer cannot download MyPage.aspx from server.domain.tld
        Internet Explorer was not able to open this Internet site. The requested site is either
        unavailable or cannot be found. Please try again later.

    (page name and address anonymised)

    Why is this? Is it due to the contents of the file coming from the HTTP response .Flush() method rather than a file being accessed at its address?


  • Java try finally variations

    - by Petr Gladkikh
    This question has nagged me for a while, but I have not found a complete answer to it yet (e.g. this one is for C#: http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider the two following Java code fragments:

        Closeable in = new FileInputStream("data.txt");
        try {
            doSomething(in);
        } finally {
            in.close();
        }

    and the second variation:

        Closeable in = null;
        try {
            in = new FileInputStream("data.txt");
            doSomething(in);
        } finally {
            if (null != in)
                in.close();
        }

    The part that worries me is that the thread might be somehow interrupted between the moment the resource is acquired (e.g. the file is opened) and the moment the resulting value is assigned to the respective local variable. Are there any scenarios in which the thread might be interrupted at that point other than:

    - InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError exception is thrown
    - JVM exits (e.g. via kill, System.exit())
    - Hardware failure (or a bug in the JVM, for a complete list :)

    I have read that the second approach is somewhat more "idiomatic", but IMO in the scenario above there's no difference, and in all other scenarios they are equal.

    So the question: what are the differences between the two? Which should I prefer if I am concerned about freeing resources (especially in heavily multi-threaded applications)? Why? I would appreciate it if anyone points me to the parts of the Java/JVM specs that support the answers.


  • HTML entity encoding (convert '<' to '&lt;') on iPhone in objective-c

    - by Markus
    I'm developing an application for the iPhone that has in-app mail sending capabilities. So far so good, but now I want to avoid HTML injections, as some parts of the mail are user-generated texts. Basically I am searching for something like this:

        // inits
        NSString *sourceString = [NSString stringWithString:@"Hello world! Grüße dich Welt <-- This is in German."];

        // ----- THAT'S WHAT I'M LOOKING FOR
        // pseudo-code |
        //             V
        NSString *htmlEncodedString = [sourceString htmlEncode];

        // log
        NSLog(@"source string: %@", sourceString);
        NSLog(@"encoded string: %@", htmlEncodedString);

    Expected output:

        source string: Hello world! Grüße dich Welt <-- This is in German.
        encoded string: Hello world! Gr&#252;&#223;e dich Welt &lt;-- This is in German.

    I already googled and looked through several of SO's questions and answers, but all of them seem to be related to URL encoding, and that's not what I really need (I tried stringByAddingPercentEscapesUsingEncoding with no luck; it creates %C3%BC out of an 'ü' that should be an &#252;). A code sample would be really great (correcting mine?)...

    Thanks in advance, Markus
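
    (Not an Objective-C answer, just pinning down the transformation being asked for.) The expected output above is produced by escaping the markup characters and then replacing every non-ASCII character with a numeric character reference; as a reference point, the same encoding can be sketched in a few lines of Python 2:

        # -*- coding: utf-8 -*-
        from cgi import escape   # Python 3 equivalent: html.escape

        def html_encode(source):
            # escape() protects &, < and >; xmlcharrefreplace turns u'\xfc' (ü)
            # into &#252; and so on for every character outside ASCII.
            return escape(source).encode('ascii', 'xmlcharrefreplace')

        source = u"Hello world! Grüße dich Welt <-- This is in German."
        print html_encode(source)
        # Hello world! Gr&#252;&#223;e dich Welt &lt;-- This is in German.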


  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books which have ISBN, authors, title, pages, etc., as well as DVDs with play time, directors, artists, etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used ways in an RDBMS would be:

    - EAV model, which would have pretty bad performance characteristics and would make it either impractical, or perform even worse, if I'd like to display the author of a book in a list of different products (think start page, recommended products, etc.).
    - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    - Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views, etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now:

    - Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?

