Search Results

Search found 2436 results on 98 pages for 'hdd cloning'.

Page 81/98

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but for private reasons (as well as the ability to publish anywhere and my heavy interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an HDD size of anywhere between 6GB (min) and 12GB (max) as a full game deployed offline. The individual images aren't significantly large, so streaming would be entirely possible if only the assets required were streamed as needed; the game has a massive file size because of the sheer amount of content. Some images or spritesheets, however, would be quite massive (e.g. a very large dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small: about 7MB for a full character with 15 animations in every direction (not all animations are required immediately), so only a few hundred KB need to download before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, flipping through frames quite rapidly. All my sprites have about 25 frames per animation and 5 directions (a spritesheet for each direction and animation), and run at 30fps. Upon changing direction or animation, or when a new character enters, spritesheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but from what I read I am afraid HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.
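
    A rough sketch (not a benchmark) of the drawing work being described, in plain canvas JavaScript: the per-frame cost the question worries about is essentially one drawImage call per character, copying the current frame out of a large sprite sheet. The canvas id, file name and frame layout below are made up for illustration.

        var canvas = document.getElementById("game");      // assumes a <canvas id="game">
        var ctx = canvas.getContext("2d");
        var sheet = new Image();
        sheet.src = "dragon_walk_south.png";                // hypothetical 2048x2048 sheet

        var FRAME_W = 409, FRAME_H = 409, FRAMES = 25;      // ~25 frames packed 5x5
        var frame = 0;

        sheet.onload = function () {
            setInterval(function () {
                ctx.clearRect(0, 0, canvas.width, canvas.height);
                var sx = (frame % 5) * FRAME_W;
                var sy = Math.floor(frame / 5) * FRAME_H;
                // one drawImage call blits the current frame out of the sheet
                ctx.drawImage(sheet, sx, sy, FRAME_W, FRAME_H, 100, 100, FRAME_W, FRAME_H);
                frame = (frame + 1) % FRAMES;
            }, 1000 / 30);                                  // the 30fps rate from the question
        };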

    Read the article

  • Dell Inspiron 7520 and Ubuntu 12.04 issues

    - by user91358
    I have a Dell Inspiron 7520 in the highest configuration: 3rd Generation Intel® Core™ i7-3612QM processor (6M Cache, up to 3.1 GHz), 15.6" Full High Definition (1080p) LED display, 8GB Dual Channel DDR3 SDRAM at 1600MHz, 1TB 5400RPM SATA HDD + 32GB mSATA SSD w/Intel Smart Response, Blu-ray Disc (BD) Combo (reads BD and writes to DVD/CD), AMD Radeon™ HD 7730M 2GB, 6.09 lbs. I installed Ubuntu 12.04 a few days ago and I'm facing some issues: 1) Sometimes the whole notebook freezes and I have to hold the power button for 5 seconds to shut it down. I think it is something with the VGA and a connected external monitor. I have read somewhere that this is already a reported bug, but I am not sure, because it happens sporadically. Sometimes it freezes right after I log in; sometimes it runs for a few hours and then freezes. I am using the proprietary drivers, but I wasn't able to install them through updates. 2) The fan is quite noisy even when the notebook is almost idle (max 10% CPU usage). Can you recommend some power-management software to lower the noise? I have tried the CPU frequency scaling indicator, but it seems to have no effect. 3) When I want to log out, restart or shut down using the menu in the upper right corner, the top and left panels disappear, but programs keep running and won't close to complete the logout or shutdown of the OS. When I use the CLI command, it works fine. Thanks for any help you can provide.

    Read the article

  • Video Bug after a fresh installation

    - by Matan
    Hello, I just installed Ubuntu 10.10 (I'm brand new to Ubuntu) on my laptop. I seem to have a video bug that I don't know how to deal with. When the log-in screen comes up, the boxes are way off in the corner of the screen (partially off it). When I enter my password, the screen goes black for a few seconds, then returns to the login screen. I can open a Terminal window and enter my login info that way. When I go back to GNOME (Ctrl+Alt+F7 or whatever) it shows me as "logged in" but I still can't get to the desktop. If anyone has any advice, I'd love to hear it--just try to use simple language, please, since I really don't know Linux at all yet! I'm running an Averatec 3700 Series: Mobile AMD Sempron 3000+, 512 MB DDR, 80 GB HDD. After looking at this question I tried going in through Failsafe mode (took me a while to figure out the hold-shift-while-booting thing) and playing around with the resolution. Setting a somewhat wider resolution did seem to fix things so that I can log into regular GNOME, I think. I'm not sure if this fix will persist, but it seems like it might!

    Read the article

  • How do I make a Live USB without destroying my Windows computer?

    - by user71089
    Unlike every other post I have read, let me begin by saying that I am TERRIFIED of destroying my Windows 7 home system in the attempt to make a bootable Ubuntu thumb drive. I very specifically DO NOT want to attempt configuring a dual-boot system. What I am seeking is a portable operating system on a thumb drive, through which I can run my COREL software via VirtualBox-4.1.16-78094-Win. I want to be able to use my own software on my work computer, anywhere I go, without any worries about screwing up the host system. Supposedly, I just made a bootable thumb drive, having successfully loaded the ISO for ubuntu-12.04-desktop-i386 using Pendrivelinux's Universal-USB-Installer-1.9.0.1. However, all that I get is a thumb drive that wants to install Ubuntu onto my PC's HDD. I am not finding any clear path to this end in the posts I am reading. Like it or not, Windows is a fact of life for me. The goal is to be able to use my software on my work PC without doing anything intrusive that will cost me my job. If I have a meltdown on my PC at home trying to make this happen, it will be nearly as bad. Is this even doable? Can the process be made clear? Thank you, Ubuntu World...!

    Read the article

  • Ubuntu (i386 - 32bit) 12.04 LTS [DESKTOP] - Freezes During Preparing to Install

    - by Michael Ecklund
    I know this is for the Desktop version of Ubuntu, but I definitely plan on installing the Server edition as well; I just want a GUI to manage my server. Here is what I have done: placed the Ubuntu i386/32-bit 12.04 LTS [DESKTOP] disc in the disc tray (the disc loads fine), clicked "Install Ubuntu" without checking either "Download updates while installing" or "Install this third-party software", and clicked Continue. The mouse cursor turns to a spinning circle and stays that way while the screen freezes at "Preparing to install Ubuntu". I tried not checking any of the boxes and clicking Continue. I tried checking "Download updates while installing". I tried checking "Install this third-party software". I tried checking BOTH "Download updates while installing" AND "Install this third-party software". Does anyone else face this same issue? Is there a workaround for this problem? Do I need to use an older version of Ubuntu? If so, which version do you recommend for my system specifications? My system meets the system requirements. Here are my exact system specifications. (Custom modifications: 2x320GB HDD, 256MB AGP GFX card, 2x1GB RAM)

    Read the article

  • Ubuntu is unresponsive and scrolling is very jerky

    - by peterpipe
    I'm new to Ubuntu. I've been using 12.04 LTS for over a week now. For 3 days it was great, but now I'm having a number of problems. I'm using an Acer Aspire One D255; it has a 250GB HDD and 1GB of memory. My browser is Firefox. Pages freeze occasionally, and after waiting for over 10 minutes I have to use the computer's off button. Pages crash. On Facebook some pictures don't load and I have to close down and restart a couple of times. Scrolling can sometimes be very jerky. Downloading software has become a problem and I have a 'no entry' sign in the top taskbar. I've looked at this page for some help, but the words used and the explanations are not helpful to someone like me who has little knowledge of this. Please - 'Ubuntu for dummies'. I'd like to continue using Ubuntu but am close to giving up.

    Read the article

  • trying to upgrade memory

    - by user214876
    I've been using Ubuntu on my laptop for a while now. Not quite used to it yet. I've got an Acer Aspire with the original 4GB of memory and a 500GB HDD, presently running 12.04, 32-bit. I have the 13.04 upgrade disc and want to upgrade my memory to 8GB. Every time I install the 8GB of memory, the system won't boot into either version. I downloaded the 64-bit version of both releases, but no results yet. Can anyone offer a suggestion here? I'm kinda lost. Additional information: The memory was purchased through Acer/Kingston and is recommended for this computer. I watched the video on installing it, so I doubt it's installed wrong (there's only one way of putting it in). I swapped operating systems from Ubuntu 12.04 to 13.04 to 13.10 and now to Xubuntu 13.10, 64-bit, and I'm still not having any luck with this upgrade. Would it be necessary to upgrade the CPU? It's just a thought; I don't know what else could keep me from utilizing the new memory. Additional information: I called Kingston this afternoon; they are sending replacement lower-density memory modules (2/4 gig - 8 gig). Tech support says I need to upgrade the BIOS via DOS to utilize the new memory, since it is no longer a Windows system. I'm not sure how to go about that, but it's a learning process I can live with. Thank you all for your help/support. I realize this isn't an Ubuntu problem, but each new user of this OS seems to share similar problems, and maybe someone can use this info to their advantage.

    Read the article

  • On Facebook pictures don't always load properly and scrolling is very jerky

    - by peterpipe
    I'm new to Ubuntu. I've been using 12.04 LTS for over a week now. For 3 days it was great, but now I'm having a number of problems. I'm using an Acer Aspire One D255; it has a 250GB HDD and 1GB of memory. My browser is Firefox. Pages freeze occasionally, and after waiting for over 10 minutes I have to use the computer's off button. Pages crash. On Facebook some pictures don't load and I have to close down and restart a couple of times. Scrolling can sometimes be very jerky. Downloading software has become a problem and I have a 'no entry' sign in the top taskbar. I've looked at this page for some help, but the words used and the explanations are not helpful to someone like me who has little knowledge of this. Please - 'Ubuntu for dummies'. I'd like to continue using Ubuntu but am close to giving up.

    Read the article

  • iTunes for Ubuntu Studio

    - by soundblastdj
    I have finally gotten my old Mac HDD sorted out, and now I would like to know if anybody has either: a) a way to run iTunes without Wine, as it did not work out well for me the last time I tried it, or b) any other media player that will sync with an iPod and, more importantly, use the same file system. When my Mac died, I started to get into open source. I bought a MacBook Air, only out of necessity. For almost two years now, I have not once backed up or synced my iPod. I am getting nervous that it may give up on its life soon and would like to find a solution. I don't have enough room on my Air, and it would just erase my iPod anyway... Another thing that I am having trouble with is the way iTunes arranged the music. Now it is arranged by artist, then album, then song, and I would like to have a media library, but somewhere around 400GB of music is a lot to sift through (I have attempted it in the past). Thus I am looking for something that will use the same library format. A side note: as I was writing this I started to wonder, is a Hackintosh in order here? If somebody will give me instructions on how to install Mac OS X for free (maybe Mavericks?) in a dual boot with Ubuntu, I will be ever grateful. :) Thanks, soundblastdj

    Read the article

  • Ubuntu 12.04 login screen flickers. “Could not write bytes: broken pipes”

    - by Brayn
    I use Ubuntu 12.04 x64 with a dual-boot setup. Yesterday it worked fine, but this morning when I attempted to boot, it gets to the login screen and then just flickers, alternating between the login screen and the console showing boot items (mainly Apache, the last one being "Battery status" although it's a desktop), all with [OK] status. The only error that I can see is "Could not write bytes: broken pipes" at the top of the screen. The only things I can think of that could cause this are: This morning I had a removable HDD plugged in during boot, which I usually don't have. Yesterday I installed Dwarven Fortress, which requires some x32 libraries, so I installed ia32 using Synaptic. As far as I know this shouldn't break the system, but I didn't reboot yesterday so I can't be sure. I've tried booting in recovery mode and running the utils there, but still no luck. I've run out of ideas. Thanks. EDIT: Forgot to mention that all partitions have plenty of free space. EDIT2: In the end I just reinstalled Ubuntu, as time was of the essence.

    Read the article

  • Mercurial Tagging/Branching Strategy

    - by Tony Trozzo
    My current project is broken down into 3 parts: Website, Desktop Client, and a Plug-in for a third party program. We had started out originally with Subversion for our source control but decided to try Mercurial after reading Joel Spolsky's final post. Considering we haven't really used the majority of svn's potential before, we figured starting fresh with some basic ideas of how source control worked would make this transition easy. However, after setting up our initial repository, we're lost as to how tagging and branching should work on a project like this. Essentially, we're working on all 3 of these parts at the same time. We want a release to be a combination of the 3 parts. Currently we're working in one repository. For the Plug-in part, we have the first iteration finished which we've been referring to as Plug-In v0.1. For the first official build of the other two parts, we'd also like to refer to them as Website v0.1 and Desktop Client v0.1. When all three parts are at v0.1, we'd like to have a Full Project v0.1. Our problem is we're not sure how to manage all of this in the Hg repository. Would the best way to handle this be to create 3 separate repositories for the 3 stable versions and then 3 more repositories for the current developments? Currently we have this all in one repository. Should we do this in branches (are branches any different from cloning repositories?) and tags? Any help is greatly appreciated.
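
    For reference, a minimal sketch of how named branches and tags work inside a single Mercurial repository (the branch and tag names below are placeholders, not a recommendation for this project):

        hg branch plugin-dev                 # future commits go on a named branch
        hg commit -m "start plug-in work"
        hg tag plugin-v0.1                   # a tag is just a label on one revision
        hg update default                    # switch back to the main branch
        hg merge plugin-dev                  # fold the branch back in
        hg commit -m "merge plug-in v0.1"
        hg tag full-project-v0.1             # tag the combined state of all three parts

    Unlike a clone, a named branch lives inside the same repository and shows up in every clone's history; cloning instead gives each line of development its own directory and its own store.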

    Read the article

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently, I've been working on a branch with occasional merges from the trunk and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking at alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfies the following conditions. Complete revision history should be stored in SVN for both trunk and branches. Merging in either direction (and potentially criss-crossing) should be relatively painless. Merging history should be stored in SVN to the greatest extent possible. I've looked at both git-svn and bzr-svn and neither seems to be up to the job—basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges—the revision history is one long line. As a result, any attempts to merge from trunk in git yield just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?

    Read the article

  • How do I correctly install dulwich to get hg-git working on Windows?

    - by Joshua Flanagan
    I'm trying to use the hg-git Mercurial extension on Windows (Windows 7 64-bit, to be specific). I have Mercurial and Git installed. I have Python 2.5 (32-bit) installed. I followed the instructions on http://hg-git.github.com/ to install the extension. The initial easy_install failed because it was unable to compile dulwich without Visual Studio 2003. I installed dulwich manually by: git clone git://git.samba.org/jelmer/dulwich.git cd dulwich c:\Python25\python setup.py --pure install Now when I run easy_install hg-git, it succeeds (since the dulwich dependency is satisfied). In my C:\Users\username\Mercurial.ini, I have: [extensions] hgext.bookmarks = hggit = When I type 'hg' at a command prompt, I see: "* failed to import extension hggit: No module named hggit" Looking under my c:\Python25 folder, the only reference to hggit I see is Lib\site-packages\hg_git-0.2.1-py2.5.egg. Is this supposed to be extracted somewhere, or should it work as-is? Since that failed, I attempted the "more involved" instructions from the hg-git page that suggested cloning git://github.com/schacon/hg-git.git and referencing the path in my Mercurial configuration. I cloned the repo, and changed my extensions file to look like: [extensions] hgext.bookmarks = hggit = c:\code\hg-git\hggit Now when I run hg, I see: * failed to import extension hggit from c:\code\hg-git\hggit: No module named dulwich.errors. Ok, so that tells me that it is finding hggit now, because I can see in hg-git\hggit\git_handler.py that it calls from dulwich.errors import HangupException That makes me think dulwich is not installed correctly, or not in the path. Update: From Python command line: import dulwich yields Import Error: No module named dulwich However, under C:\Python25\Lib\site-packages, I do have a dulwich-0.5.0-py2.5.egg folder which appears to be populated. This was created by the steps mentioned above. Is there an additional step I need to take to make it part of the Python "path"?
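
    A small diagnostic sketch in Python, run with C:\Python25\python (the interpreter the extension was installed into), to see whether the dulwich egg directory is actually on sys.path; the egg path is the one mentioned above and may differ on another machine:

        import sys

        egg = r"C:\Python25\Lib\site-packages\dulwich-0.5.0-py2.5.egg"
        print egg in sys.path              # False would explain the ImportError

        # eggs are normally registered via site-packages\easy-install.pth;
        # appending the path by hand makes the import work for this session only
        sys.path.append(egg)
        import dulwich
        print dulwich.__file__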

    Read the article

  • Deriving an HTMLElement Object from jQuery Object

    - by Jasconius
    I'm doing a fairly exhaustive series of DOM manipulations where a few elements (specifically form elements) have some events. I am dynamically creating (actually cloning from a source element) several boxes and assigning a change() event to them. The change event executes, and within the context of the event, "this" is the HTML Element Object. What I need to do at this point however is determine a contact for this HTML Element Object. I have these objects stored already as jQuery entities in assorted arrays, but obviously [HTMLElement Object] != [Object Object] And the trick is that I cannot cast $(this) and make a valid comparison since that would create a new object and the pointer would be different. So... I've been banging my head against this for a while. In the past I've been able to circumvent this problem by doing an innerHTML comparison, but in this case the objects I am comparing are 100% identical, just there's lots of them. Therefore I need a solid comparison. This would be easy if I could somehow derive the HTMLElement object from my originating jQuery object. Thoughts, other ideas? Help. :(
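
    For clarity, a small sketch of the relationship being discussed: a jQuery object is a wrapper whose indexed entries (or .get(0)) are the raw HTMLElement nodes, so identity comparison against this is possible without wrapping this again. The selectors and variable names below are made up.

        var storedBoxes = [];                    // jQuery objects saved at creation time
        storedBoxes.push($("#box1"));            // hypothetical element

        $("select.box").change(function () {
            for (var i = 0; i < storedBoxes.length; i++) {
                // storedBoxes[i][0] is the underlying HTMLElement the wrapper points to,
                // so this strict comparison finds the matching stored entry
                if (storedBoxes[i][0] === this) {
                    console.log("stored wrapper for this element is at index " + i);
                }
            }
        });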

    Read the article

  • call/cc in Lua - Possible?

    - by Pessimist
    The Wikipedia article on Continuation says: "In any language which supports closures, it is possible to write programs in continuation passing style and manually implement call/cc." Either that is true and I need to know how to do it or it is not true and that statement needs to be corrected. If this is true, please show me how to implement call/cc in Lua because I can't see how. I think I'd be able to implement call/cc manually if Lua had the coroutine.clone function as explained here. If closures are not enough to implement call/cc then what else is needed? The text below is optional reading. P.S.: Lua has one-shot continuations with its coroutine table. A coroutine.clone function would allow me to clone it to call it multiple times, thus effectively making call/cc possible (unless I misunderstand call/cc). However that cloning function doesn't exist in Lua. Someone on the Lua IRC channel suggested that I use the Pluto library (it implements serialization) to marshal a coroutine, copy it and then unmarshal it and use it again. While that would probably work, I am more interested in the theoretical implementation of call/cc and in finding what is the actual minimum set of features that a language needs to have in order to allow for its manual implementation.
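
    To illustrate what the quoted sentence means by "continuation passing style" (a rough Lua sketch only, not a full call/cc for ordinary direct-style code): every function takes an extra argument k, and call/cc simply hands the current k to its argument as an escape procedure.

        local function add_cps(a, b, k) return k(a + b) end
        local function mul_cps(a, b, k) return k(a * b) end

        -- call/cc written against CPS code: the escape procedure ignores its own
        -- continuation and resumes the captured one instead
        local function callcc_cps(f, k)
          return f(function(v, _) return k(v) end, k)
        end

        -- (2 + 3) * 4, but bail out early with 42 through the captured continuation
        add_cps(2, 3, function(sum)
          return callcc_cps(function(escape, k)
            return escape(42, k)          -- never reaches the multiplication
            -- return mul_cps(sum, 4, k)  -- the non-escaping path would print 20
          end, print)
        end)
        -- prints 42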

    Read the article

  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository, because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all tests are run I want access to the latest test results from those test work areas. Currently I'm cloning the master repository a dozen times and running one different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That's good for me. I'm also preparing new changes using the mq extension, which I would test on all clones as above before committing them. For testing candidate mq patches that are ready, I want to somehow deploy/synchronize them so they are available in the test clones, and apply the ones ready for testing (using some guard) before running the test. Has anybody done this synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?
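
    One possible sketch of the "versioned mq patches" approach (the patch name, paths and the guard name are placeholders): the patch queue itself becomes a small repository under .hg/patches that every test clone can pull from.

        # in the master repository
        hg init --mq                               # version-control .hg/patches
        hg qguard some-change.diff +ready          # mark patches that are ready for testing
        hg -R .hg/patches commit -A -m "publish candidate patches"

        # in each test clone, once:
        hg clone ../master-repo/.hg/patches .hg/patches

        # before each test run, after the usual pull/update/purge:
        hg -R .hg/patches pull -u
        hg qselect ready                           # only patches guarded +ready will apply
        hg qpush -a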

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing for storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to over-write master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is git push -f heroku local-topic-branch:refs/heads/master What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know! The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like Github) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work? [remote "heroku"] url = [email protected]:my-app.git push = +refs/heads/*:refs/heads/master I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like Github, for backup and cloning of all my work. Footnote: This question is similar to, but not quite the same as http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
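
    For what it's worth, a small sketch of the same idea without touching the config: HEAD always resolves to the branch currently checked out, so the refspec below pushes "whatever branch I'm on" onto Heroku's master (the alias name is made up).

        git push -f heroku HEAD:refs/heads/master

        # or wrap it in an alias so it stays a one-word command
        git config alias.pushheroku "push -f heroku HEAD:refs/heads/master"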

    Read the article

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like: /OurPlatform ---- Core ---- Core.Tests ---- Database ---- Database.Tests ---- CMS ---- CMS.Tests ---- Product1.Domain ---- Product1.Stresstester ---- Product1.Web ---- Product1.Web.Tests ---- Product2.Domain ---- Product2.Stresstester ---- Product2.Web ---- Product2.Web.Tests ==== Product1.sln ==== Product2.sln All of those are folders containing VS Projects except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders, and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, If someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
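
    A sketch of one common approach (file names and the choice of project are illustrative): run hg convert once per target repository, each time with a filemap that keeps only one project directory and moves it to the new root; the history of the kept files survives the conversion.

        # contents of core-filemap.txt (one project kept, moved to the new root):
        include Core
        rename Core .

        # create the new repository from the old one, history included:
        hg convert --filemap core-filemap.txt OurPlatform Core-repo

        # repeat with a different filemap per project, then tie the pieces
        # together from a parent repo via an .hgsub file, e.g.:
        #   Core = ../Core-repo
        #   Database = ../Database-repo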

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (web application) is dependent. We are currently discussing several alternatives: Copy the binary files by hand. Pro: Not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site/migrating the old one. Builds up another hurdle to take. Manage them all with git. Pro: Removes the possibility to 'forget' to copy an important file. Contra: Bloats the repository and decreases flexibility to manage the code-base, and checkouts/clones/etc. will take quite a while. Separate repositories. Pro: Checking out/cloning the source code is fast as ever, and the images are properly archived in their own repository. Contra: Removes the simplicity of having the one and only git repository on the project. Surely introduces some other things I haven't thought about. What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project? Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in years) but are very relevant to a program. The program will not work without the files. Update2: I found a really nice screencast on using git-submodule at GitCasts.
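
    As a concrete sketch of the "separate repositories" option from the screencast (URLs and paths here are placeholders): the main repository records only a pointer to a specific commit of the binary repository.

        git submodule add git://example.com/site-images.git assets/images
        git commit -m "track binary assets as a submodule"

        # a fresh clone fetches the binaries in an explicit second step
        git clone git://example.com/webapp.git
        cd webapp
        git submodule update --init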

    Read the article

  • Working with complex objects in Prevayler commands

    - by alexantd
    The demos included in the Prevayler distribution show how to pass a couple of strings (or something simple like that) into a command constructor in order to create or update an object. The problem is that I have an object called MyObject that has a lot of fields. If I had to pass all of them into the CreateMyObject command manually, it would be a pain. So an alternative I thought of is to pass my business object itself into the command, but to hang onto a clone of it (keeping in mind that I can't store the BO directly in the command). Of course after executing this command, I would need to make sure to dispose of the original copy that I passed in. public class CreateMyObject implements TransactionWithQuery { private MyObject object; public CreateMyObject(MyObject business_obj) { this.object = (MyObject) business_obj.clone(); } public Object executeAndQuery(...) throws Exception { ... } } The Prevayler wiki says: Transactions can't carry direct object references (pointers) to business objects. This has become known as the baptism problem because it's a common beginner pitfall. Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution its object references no longer refer to the intended objects -- any objects they may have referred to at first will have been copied by the serialization process! Therefore, a transaction must carry some kind of string or numeric identifiers for any objects it wants to refer to, and it must look up the objects when it is executed. I think by cloning the passed-in object I will be getting around the "direct object pointer" problem, but I still don't know whether or not this is a good idea...
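
    For contrast, a sketch of the lookup-by-identifier pattern the quoted wiki text describes. The MyCompany type, the register() call and the fields are hypothetical, and the executeAndQuery signature is assumed from the question's snippet and Prevayler's non-generic TransactionWithQuery.

        import org.prevayler.TransactionWithQuery;

        public class CreateMyObject implements TransactionWithQuery {
            private final String objectId;   // carry identifiers and plain fields,
            private final String name;       // never a pointer to a live business object

            public CreateMyObject(String objectId, String name) {
                this.objectId = objectId;
                this.name = name;
            }

            public Object executeAndQuery(Object prevalentSystem, java.util.Date executionTime)
                    throws Exception {
                // build or look up the business object inside the transaction,
                // against the prevalent system that is executing it
                MyCompany system = (MyCompany) prevalentSystem;
                MyObject created = new MyObject(objectId, name);
                system.register(created);
                return created;
            }
        }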

    Read the article

  • Using same onmouseover function for multiple objects

    - by phpscriptcoder
    I'm creating a building game in JavaScript and PHP that involves a grid. Each square in the grid is a div, with an own onmouseover and onmousedown function: for(x=0; x < width; x++) { for(y=0; y < height; y++) { var div = document.createElement("div"); //... div.onmouseclick = function() {blockClick(x, y)} div.onmouseover = function() {blockMouseover(x, y)} game.appendChild(div); } } But, all of the squares seem to have the x and y of the last square that was added. I can sort of see why this is happening - it is making a pointer to x and y instead of cloning the variables - but how could I fix it? I even tried for(x=0; x < width; x++) { for(y=0; y < height; y++) { var div = document.createElement("div"); var myX = x; var myY = y; div.onmouseclick = function() {blockClick(myX, myY)} div.onmouseover = function() {blockMouseover(myX, myY)} game.appendChild(div); } } with the same result. I was using div.setAttribute("onmouseover", ...) which worked in Firefox, but not IE. Thanks!
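
    A sketch of the usual pattern for this closure issue (note the standard DOM property is onclick rather than onmouseclick): wrapping the loop body in an immediately-invoked function gives every square its own copy of the coordinates, instead of all handlers sharing the outer x and y.

        for (var x = 0; x < width; x++) {
            for (var y = 0; y < height; y++) {
                (function (px, py) {          // px/py are local to this call
                    var div = document.createElement("div");
                    div.onclick = function () { blockClick(px, py); };
                    div.onmouseover = function () { blockMouseover(px, py); };
                    game.appendChild(div);
                })(x, y);
            }
        }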

    Read the article

  • 'WebException' error on back button even when calling 'void' async method

    - by BlazingFrog
    I have a Windows Phone app that allows the user to interact with it. Each interaction will always result in an async WCF call. In addition to that, some interactions will result in opening the browser, maps, email, etc... The problem is that, when hitting the back button, I sometimes get the following error "An error (WebException) occurred while transmitting data over the HTTP channel." with the following stack trace: at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.OnGetResponse(IAsyncResult result) at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClassa.<InvokeGetResponseCallback>b__8(Object state2) at System.Threading.ThreadPool.WorkItem.WaitCallback_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadPool.WorkItem.doWork(Object o) at System.Threading.Timer.ring() My understanding is that it's happening because my app opened another app (browser, maps, etc) before it had the time to execute the EndMyAsyncMethod(System.IAsyncResult result). Fair enough... What's really annoying is that it seems it should get fixed by cloning the server-side method, only making it void with the following operation contract [OperationContract(IsOneWay = true)] but I'm still getting the error. What's worse is that the exception is thrown in a system-generated part of the code and, thus, cannot be manually caught, causing the app to just crash. I simply don't understand the need to execute an Endxxx method when it's explicitly marked as OneWay and void. EDIT I did find a similar issue here. It does seem that it is related to the message getting to the service (not the client callback). My next question is: if I'm now calling a method marked AsyncPattern and OneWay, what exactly should I be waiting for on the client to be sure the message was transmitted successfully? This is the new service definition: [OperationContract(IsOneWay = true, AsyncPattern = true)] IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s); void EndCacheQueryWithoutCallback(IAsyncResult r); And the implementation: public IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s) { // do some stuff return new CompletedAsyncResult<string>(""); } public void EndCacheQueryWithoutCallback(IAsyncResult r) { }

    Read the article

  • function binding and the clone() function - jQuery

    - by Sam Vloeberghs
    Hi, I have problems with my keyup binding when cloning an element. Here's the scenario: I have HTML markup like this: <tr class="rijbasis"> <td> <input type="text" class="count" /> </td> <td> <span class="cost">10</span> </td> <td> <span class="total">10</span> </td> </tr> I'm binding a keyup function to the input element of my table row like this: $('.rijbasis input').keyup(function(){ var parent = $(this).parent().parent(); $('.total',parent).text(parseInt($('.cost',parent).text()) * parseInt($('.count',parent).val())); }); I designed the function like this so I could clone the table row on an onclick event and append it to the tbody: $('.lineadd').click(function(){ $('.contract tbody').append($('.contract tbody tr:last').clone()); $('.contract tbody tr:last input').val("0"); }); This all works, but the keyup function doesn't work on the input elements of the cloned row.. Can anybody help or advise? I hope I was clear enough and I'll surely add details if needed to solve this problem. Greetings
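
    Two patterns commonly used in this situation, sketched against the markup above: copy the handlers along with the row, or bind one delegated handler so rows appended later are covered automatically (on jQuery 1.7+ the .live() call would be written with .on() instead).

        // 1) copy data and events together when cloning
        $('.contract tbody').append($('.contract tbody tr:last').clone(true));

        // 2) or bind once, covering present and future rows
        $('.rijbasis input').live('keyup', function () {
            var parent = $(this).closest('tr');
            $('.total', parent).text(
                parseInt($('.cost', parent).text(), 10) * parseInt($(this).val(), 10)
            );
        });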

    Read the article

  • CakePHP: Interaction between different files/classes

    - by Alexx Hardt
    Hey, I'm cloning a commercial student management system. Students use the frontend to apply for lectures; uni staff can modify events (time, room, etc). The core of the app will be the algorithm which distributes the seats to students. I already asked about it here: How to implement a seat distribution algorithm for uni lectures. Now, I found a class for that algorithm here: http://www.phpclasses.org/browse/file/10779.html I put the 'class GA' into app/vendors. I need to write a 'class Solution', which represents one object (a child, and later a parent for the evolutionary process). I'll also have to write functions mutate(), crossover() and fitness(). fitness() calculates a score of a solution, based on whether there are overbooked courses etc; crossover() is the crazy monkey sex function which produces a child from two parents, and mutate() modifies a child after crossover. Now, the fitness() function needs to access a few related models and their find() functions. It evaluates a solution's fitness by checking, e.g., if there are overbooked courses or unfulfilled wishes, and penalizes that. Where would I put ga.php, solution.php and the three functions? ga.php has to access the functions, but the functions have to access the models. I also don't want to call any App::import()'s from within the fitness() function, because it gets called many thousands of times when the algorithm runs. Hope someone can help me. Thanks in advance =)

    Read the article

  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    Hi, this seems to be a bit of an infamous error all over the web, so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly, this works perfectly with a PNG but gives the above error with JPG and GIF, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically, the solution is to use a memory stream, as I am doing.... public static byte[] ConvertImageToByteArray(Image imageToConvert) { using (var ms = new MemoryStream()) { ImageFormat format; switch (imageToConvert.MimeType()) { case "image/png": format = ImageFormat.Png; break; case "image/gif": format = ImageFormat.Gif; break; default: format = ImageFormat.Jpeg; break; } imageToConvert.Save(ms, format); return ms.ToArray(); } } More detail on the exception (the reason this causes so many issues is the lack of explanation): System.Runtime.InteropServices.ExternalException was unhandled by user code Message="A generic error occurred in GDI+." Source="System.Drawing" ErrorCode=-2147467259 StackTrace: at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams) at System.Drawing.Image.Save(Stream stream, ImageFormat format) at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139 at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132 at lambda_method(ExecutionScope , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7() at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) InnerException: OK, things I have tried so far: cloning the image and working on that; retrieving the encoder for that MIME type and passing that with a JPEG quality setting. Please can anyone help.
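
    One workaround often suggested for this family of GDI+ errors, sketched in C# under the assumption that the source Image's underlying stream has been disposed or locked: draw the image onto a fresh Bitmap first, so the save no longer depends on the original source. The class name and the explicit ImageFormat parameter are illustrative.

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        public static class ImageBytes
        {
            public static byte[] ConvertImageToByteArray(Image imageToConvert, ImageFormat format)
            {
                using (var copy = new Bitmap(imageToConvert.Width, imageToConvert.Height))
                {
                    using (var g = Graphics.FromImage(copy))
                    {
                        g.DrawImage(imageToConvert, 0, 0, copy.Width, copy.Height);
                    }
                    using (var ms = new MemoryStream())
                    {
                        copy.Save(ms, format);   // the copy has no tie to the original stream
                        return ms.ToArray();
                    }
                }
            }
        }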

    Read the article
