Search Results

Search found 54281 results on 2172 pages for 'function call'.


  • Polymorphism and passing

    - by Tucker Morgan
    OK, I am going to try to state my question as clearly as possible, but if you have trouble understanding it please just ask for clarification; I really want to figure out how to do this. I am writing a text-based RPG, and I have three classes that inherit from a superclass. They all have special attacks that they can perform. At the same time, I have a class that holds the function which handles battles in my game. How do I get the unique special-ability functions for whatever role the player chooses into the battle function? Also, I am using the vector.push_back method to handle how my subclasses are referenced. Please help me, you're my only hope.


  • Lazy coding is fun

    - by Anthony Trudeau
    Every once in a while I get the opportunity to write an application that is important enough to do, but not important enough to do the right way -- meaning standards, best practices, good architecture, et al.  I call it lazy coding.  The industry calls it RAD (rapid application development). I started on the conversion tool at the end of last week.  It will convert our legacy data to a completely new system which I'm working on piece by piece.  It will be used in the future, but only the new parts, because it'll only be necessary to convert the individual pieces of the data once.  It was the perfect opportunity to just whip something together, but it was still functional, unlike a prototype or proof of concept.  Although I would never write an application like this for a customer (internal or external), this methodology (if you can call it that) works great for something like this. I wouldn't be surprised if I get flamed for equating RAD to lazy coding or to lacking standards, best practices, or good architecture.  Unfortunately, the equation fits current usage.  Although it's possible to create a good, maintainable application using the RAD methodology, it's just too ripe for abuse, and requires too much discipline for one person, let alone a team, to do right. Sometimes it's just fun to throw caution to the wind and start slamming code.


  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and the text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options:

    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims the Ajax request down to the bare minimum and doesn't mix the data with the markup. What's not so great is that I now need to use JavaScript to do my rendering, instead of having a template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.

    Option 2: Send the HTML markup. What's good about this is that I can have the same server-side templating engine I'm using for the rest of my rendering tasks (Django) do the rendering of the lightbox. JavaScript is only used to insert the HTML fragment into the page, so it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call, for some reason. I'm not sure what makes me uneasy about it. I mean, it's the same way every web page is served up -- data plus markup -- right?
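
    For illustration, a minimal jQuery sketch of the two options (the endpoint URLs, element ids, and response shape here are made up, not from the original post):

        // Option 1: the server sends only JSON data; the client renders it
        $.getJSON('/cart/options/', { product: 42 }, function (data) {
            var html = '<ul>';
            $.each(data.colors, function (i, color) {
                html += '<li>' + color + '</li>';
            });
            html += '</ul>';
            $('#lightbox').html(html);
        });

        // Option 2: the server renders the fragment; the client only inserts it
        $.get('/cart/options/fragment/', { product: 42 }, function (fragment) {
            $('#lightbox').html(fragment); // fragment is ready-made HTML
        });

    Either way the transport is the same Ajax call; the difference is only where the template lives.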


  • Is hiding content with JavaScript or "text-indent: -9999px" bad for SEO?

    - by Samuel
    So apparently hiding content using "display: none" is bad for SEO and is seen by Googlebot as deceptive. That's according to a lot of the posts I read online, and even questions on this site. But what if I hide keyword-rich text using JavaScript? A jQuery example:

        $(function() {
            $('#keywordRichTextContainer').hide();
        });

    Or using visibility: hidden:

        $(function() {
            $('#keywordRichTextContainer').css({ visibility: 'hidden', position: 'absolute' });
        });

    Would any of these techniques cause my site to be penalized? If Googlebot can't read JavaScript, then if I'm hiding content through JS it shouldn't know, right? What about using "text-indent: -9999px"?
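
    (For reference, the text-indent technique asked about is usually written as a plain CSS rule, reusing the post's container id:)

        #keywordRichTextContainer {
            text-indent: -9999px; /* text remains in the DOM but is pushed off-screen */
        }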


  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided by Jakob Ehn (a most interesting read, which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

        vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

        C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command consumes a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx


  • 401 Using Multiple Authentication methods IE 10 only

    - by jon3laze
    I am not sure if this is more of a coding issue or a server setup issue, so I've posted it on Stack Overflow and here... On our production site we've run into an issue that is specific to Internet Explorer 10. I am using jQuery to do an Ajax POST to a web service on the same domain, and in IE10 I get a 401 response; IE9 works perfectly fine. I should mention that we have mirrored code in another area of our site and it works perfectly fine in IE10. The only difference between the two areas is that one is under a subdomain and the other is at the root level: www.my1stdomain.com vs. portal.my2nddomain.com. The directory structure on the server for these is:

        \my1stdomain\webservice\name\service.aspx
        \portal\webservice\name\service.aspx

    Inside of the \portal\ and \my1stdomain\ folders I have a page that does an Ajax call; both pages are identical:

        $.ajax({
            type: 'POST',
            url: '/webservice/name/service.aspx/function',
            cache: false,
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            data: '{ "json": "data" }',
            success: function() { },
            error: function() { }
        });

    I've verified permissions are the same on both folders on the server side. I've applied a workaround fix of placing <meta http-equiv="X-UA-Compatible" content="IE=9"> to force compatibility view (putting IE into compatibility mode fixes the issue). This seems to be working in IE10 on Windows 7; however, IE10 on Windows 8 still sees the same issue. These pages are classic ASP with the headers being included, and there are no other meta tags in use. The doctype is specified as <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> on the portal page and <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> on the main domain.

    UPDATE 1: I used Microsoft Network Monitor 3.4 on the server to capture the request, with the following filter to capture the 401:

        Property.HttpStatusCode.StringToNumber == 401

    This was the response:

        Http: Response, HTTP/1.1, Status: Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods, see frame details
          ProtocolVersion: HTTP/1.1
          StatusCode: 401, Unauthorized
          Reason: Unauthorized
          ContentType: application/json; charset=utf-8
            MediaType: application/json; charset=utf-8
              MainType: application/json
              charset: utf-8
          Server: Microsoft-IIS/7.0
          jsonerror: true
          WWWAuthenticate: Negotiate
            Authenticate: Negotiate
            AuthenticateData: Negotiate
          WWWAuthenticate: NTLM
            Authenticate: NTLM
            AuthenticateData: NTLM
          XPoweredBy: ASP.NET
          Date: Mon, 04 Mar 2013 21:13:39 GMT
          ContentLength: 105
          HeaderEnd: CRLF
          payload: HttpContentType = application/json; charset=utf-8
            HTTPPayloadLine: {"Message":"Authentication failed.","StackTrace":null,"ExceptionType":"System.InvalidOperationException"}

    The thing here that really stands out is "Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods". With this I'm still confused as to why this only happens in IE10 if it's a permission/authentication issue. What was added in IE10, and where should I be looking for the root cause of this?

    UPDATE 2: Here are the headers from the client machine, captured in Fiddler (server information removed):

    Main

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 9 bytes.
        == FLAGS ==================
        BitFlags: [ServerPipeReused] 0x10
        X-EGRESSPORT: 44537
        X-RESPONSEBODYTRANSFERLENGTH: 9
        X-CLIENTPORT: 44770
        UI-COLOR: Green
        X-CLIENTIP: 127.0.0.1
        UI-OLDCOLOR: WindowText
        UI-BOLD: user-marked
        X-SERVERSOCKET: REUSE ServerPipe#46
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:2644
        == TIMING INFO ============
        ClientConnected: 14:43:08.488
        ClientBeginRequest: 14:43:08.488
        GotRequestHeaders: 14:43:08.488
        ClientDoneRequest: 14:43:08.488
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 14:40:28.943
        FiddlerBeginRequest: 14:43:08.488
        ServerGotRequest: 14:43:08.488
        ServerBeginResponse: 14:43:08.592
        GotResponseHeaders: 14:43:08.592
        ServerDoneResponse: 14:43:08.592
        ClientBeginResponse: 14:43:08.592
        ClientDoneResponse: 14:43:08.592
        Overall Elapsed: 0:00:00.104
        The response was buffered before delivery to the client.
        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]

    Portal

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 105 bytes.
        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-EGRESSPORT: 44444
        X-RESPONSEBODYTRANSFERLENGTH: 105
        X-CLIENTPORT: 44439
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#7
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:7132
        == TIMING INFO ============
        ClientConnected: 14:37:59.651
        ClientBeginRequest: 14:38:01.397
        GotRequestHeaders: 14:38:01.397
        ClientDoneRequest: 14:38:01.397
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 14:37:57.880
        FiddlerBeginRequest: 14:38:01.397
        ServerGotRequest: 14:38:01.397
        ServerBeginResponse: 14:38:01.464
        GotResponseHeaders: 14:38:01.464
        ServerDoneResponse: 14:38:01.464
        ClientBeginResponse: 14:38:01.464
        ClientDoneResponse: 14:38:01.464
        Overall Elapsed: 0:00:00.067
        The response was buffered before delivery to the client.
        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]


  • Godaddy hosting with dynamic ip?

    - by Razor Storm
    I bought a domain name on godaddy.com, let's call it www.website.com, but my server has a dynamic IP. How do I resolve this issue? What I've tried: I got a DNS account at dyndns.com, let's call it dns.dyndns.com, and set up GoDaddy to forward the domain to dns.dyndns.com. Problem: when the user goes to www.website.com, they see dns.dyndns.com in the URL bar. Bad. So then I set up the forwarding on GoDaddy to forward with domain masking, but the problem now is that users can't even see URL paths, GET queries, or hash marks anymore! www.website.com/folder/search?id=4#ajax=4 becomes just www.website.com in the URL bar. Bad! What do I do?


  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important word here is "knowledge". Nobody ever said "every piece of code". I'll try to give some examples of misusing the DRY principle.

    Code Repetition by Coincidence

    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think of some lines of code about which the developer thinks, "I already wrote something similar." Then he takes the original code, puts it into a public method, or even worse into a base class where none had been before, adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer can't even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody else will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. The fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say, because you can't understand what the code is actually doing. So better not touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects everywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have a system with a weak design. Now you get a system which just can't be maintained anymore.

    So what can you do about it? When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague: if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you do identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific; if you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused.

    If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.

    Requirements Repetition by Coincidence

    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own rules at any time. (Assume that this similarity is actually by coincidence and not by political membership; there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or, more likely, what happens if users from one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to find requirements that are repeated by coincidence, and then there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. This independent knowledge, "tax rules in Timbuktu" or wherever, should be as pure as possible, without much overhead and stuff that does not belong to it, so that you can write every independent requirement short and clean.

    DRYing try-catch and using Blocks

    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine a complex exception-handling setup, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it:

        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different, line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }

    The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of evil DRYing in this situation:

    - Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the tight coupling of the formerly independent implementations is the strongest one; the readability of the code and the use of a parameter to control the logic suffer as well.

    - Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as in any "code repetition by coincidence" situation.

    - Enforce a base class on all the classes where this pattern appears and use the template method pattern. This has the same readability and maintainability problems as above, but is additionally complex and tightly coupled because of the base class. I would call this "inheritance by coincidence", which will not lead to great software design.

    What can you do about it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If this were the case, it wouldn't be a problem at all; I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last, but not worst, option is to keep the repetitions. Some patterns of code must simply be maintained consistently; there is nothing we can do against it, and there is no reason to make the code unreadable for it.

    Conclusion

    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons; it takes quite a bit of flexibility and creativity to keep such code simple and maintainable. That is not a problem of the principle; it is the problem of blindly applying a principle without understanding the problem it should solve. The result is usually much worse than ignoring the principle.


  • Experience formula with javascript

    - by StealingMana
    I'm having trouble working out a formula using this experience curve to get the total exp after each level. I bet it's easy and I'm just overthinking it.

        var maxlvl = 10;
        var increment = 28;
        var baseexp = 100;

        function calc() {
            for (var i = 0; i < (maxlvl * increment); i += increment) {
                var expperlvl = baseexp + i;
                document.writeln(expperlvl);
            }
        }

    I figured it out:

        var maxlvl = 6;
        var base = 200;
        var increment = 56;

        function total() {
            var totalxp = (base * (maxlvl - 1)) + (increment * (maxlvl - 2) * (maxlvl - 1) / 2);
            document.write(totalxp);
        }
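
    In case it helps a future reader: the closed form is just the sum of an arithmetic series. The per-level requirements base + increment*k for k = 0 .. maxlvl-2 add up to base*(maxlvl-1) plus increment times the triangular number (maxlvl-2)*(maxlvl-1)/2. A quick sketch checking the formula against a plain loop (function names are mine, not from the post):

        function totalByLoop(maxlvl, base, increment) {
            var sum = 0;
            for (var k = 0; k < maxlvl - 1; k++) {
                sum += base + increment * k; // xp required for each level-up
            }
            return sum;
        }

        function totalByFormula(maxlvl, base, increment) {
            return base * (maxlvl - 1) + increment * (maxlvl - 2) * (maxlvl - 1) / 2;
        }

        console.log(totalByLoop(6, 200, 56));    // 1560
        console.log(totalByFormula(6, 200, 56)); // 1560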


  • Debugging .NET 2.0 assembly from unmanaged code in VS2010?

    - by Rick Strahl
    I've run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I've been utterly blocked by VS 2010's apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code. Here's what I'm actually doing (a simplified scenario to demonstrate):

    - I have a .NET 2.0 assembly that is compiled for COM interop
    - Compile the project with a .NET 2.0 target and register it for COM interop
    - Set a breakpoint in the .NET component in one of the class methods
    - Instantiate the .NET component via COM interop and call the method

    The result is that the COM call works fine, but the debugger never triggers on the breakpoint. If I take that same assembly and target it at .NET 4.0, without any other changes, everything works as expected: the breakpoint set in the assembly project triggers just fine. The easy answer to this problem seems to be "just switch to .NET 4.0", but unfortunately the application and the way the runtime is actually hosted have a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame, as the promise of backwards compatibility is once again broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to application requirements. I've been searching high and low and experimenting back and forth, and posted a few questions on the MSDN forums, but haven't gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from unmanaged code. Incidentally, debugging .NET 2.0-targeted assemblies works fine when running with a managed startup application; it seems the issue is specific to an unmanaged startup. My particular issue is with custom runtime hosting, which at first I thought was the problem. But the same issue manifests when using COM interop against a .NET 2.0 assembly, so the hosting is probably not the issue. Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario?

    © Rick Strahl, West Wind Technologies, 2005-2010


  • ming 0.4.2 compilation errors on Ubuntu 12.04 when installing from source code

    - by gmuhammad
    I am trying to install ming 0.4.2 from source code. It was compilable before on Ubuntu 10.04, but now it's giving the following compilation errors when I try to install using sudo make install (libpng is already installed):

        /bin/bash ../libtool --tag=CC --mode=link gcc -g -O2 -Wall -DSWF_LITTLE_ENDIAN -o img2swf img2swf.o ../src/libming.la
        libtool: link: gcc -g -O2 -Wall -DSWF_LITTLE_ENDIAN -o .libs/img2swf img2swf.o ../src/.libs/libming.so
        gcc -DHAVE_CONFIG_H -I. -I../src -I../src -g -O2 -Wall -DSWF_LITTLE_ENDIAN -MT png2dbl.o -MD -MP -MF .deps/png2dbl.Tpo -c -o png2dbl.o png2dbl.c
        png2dbl.c: In function 'readPNG':
        png2dbl.c:64:8: warning: ignoring return value of 'fread', declared with attribute warn_unused_result [-Wunused-result]
        mv -f .deps/png2dbl.Tpo .deps/png2dbl.Po
        /bin/bash ../libtool --tag=CC --mode=link gcc -g -O2 -Wall -DSWF_LITTLE_ENDIAN -o png2dbl png2dbl.o ../src/libming.la
        libtool: link: gcc -g -O2 -Wall -DSWF_LITTLE_ENDIAN -o .libs/png2dbl png2dbl.o ../src/.libs/libming.so
        png2dbl.o: In function `readPNG':
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:69: undefined reference to `png_create_read_struct'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:74: undefined reference to `png_create_info_struct'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:82: undefined reference to `png_create_info_struct'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:97: undefined reference to `png_init_io'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:98: undefined reference to `png_set_sig_bytes'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:99: undefined reference to `png_read_info'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:101: undefined reference to `png_get_IHDR'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:127: undefined reference to `png_get_valid'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:156: undefined reference to `png_read_update_info'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:158: undefined reference to `png_get_IHDR'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:162: undefined reference to `png_get_channels'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:187: undefined reference to `png_get_rowbytes'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:194: undefined reference to `png_read_image'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:128: undefined reference to `png_set_expand'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:135: undefined reference to `png_set_strip_16'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:143: undefined reference to `png_set_gray_to_rgb'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:151: undefined reference to `png_set_filler'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:125: undefined reference to `png_set_packing'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:107: undefined reference to `png_get_valid'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:117: undefined reference to `png_get_PLTE'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:78: undefined reference to `png_destroy_read_struct'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:92: undefined reference to `png_destroy_read_struct'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:86: undefined reference to `png_destroy_read_struct'
        png2dbl.o: In function `writeDBL':
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:278: undefined reference to `floor'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:280: undefined reference to `compress2'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:278: undefined reference to `floor'
        /home/gmuhammad/Downloads/ming-0.4.2/util/png2dbl.c:280: undefined reference to `compress2'
        collect2: ld returned 1 exit status
        make[1]: *** [png2dbl] Error 1
        make[1]: Leaving directory `/home/gmuhammad/Downloads/ming-0.4.2/util'
        make: *** [install-recursive] Error 1


  • Create an Asp.net Gridview with Checkbox in each row

    - by ybbest
    One of the frequent requirements for an ASP.NET GridView is to add a checkbox for each row and a checkbox to select all the items, like the GridView below. This can be easily achieved by using jQuery. You can find the complete source code here.

        $(document).ready(function () {
            $('input[name$="CDSelectAll"]').click(function () {
                if ($(this).attr("checked")) {
                    $('input[name$="CDSelect"]').attr('checked', 'checked');
                } else {
                    $('input[name$="CDSelect"]').removeAttr('checked');
                }
            });
        });
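
    One caveat worth adding: in jQuery 1.6 and later, .attr("checked") no longer reports the live checked state; .prop() is the documented way to read and write it. A sketch of the same handler in that style (assuming the same name suffixes as above):

        $(document).ready(function () {
            $('input[name$="CDSelectAll"]').click(function () {
                // mirror the header checkbox state onto every row checkbox
                $('input[name$="CDSelect"]').prop('checked', $(this).prop('checked'));
            });
        });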


  • How do I pause and resume apt-fast package download?

    - by jasoncruz98
    I know that in order to speed up apt-get downloads, I can use apt-fast (which uses the aria2c or axel engine, depending on which one I install during configuration). But even though it says it can pause and resume downloads, I don't know how to do it, and I can't find any answer online that tells me how. I have no intention of pausing the apt-fast update function; I just want the ability to pause sudo apt-fast install package_name and resume the download of a package in Ubuntu at will using apt-fast (with axel or aria2c). I have seen in some forums that sudo apt-fast update cannot be paused because it requires restarting the entire process. Please correct me if I'm wrong. Any help would be much appreciated.


  • How to implement turn-based game engine?

    - by Dvole
    Let's imagine a game like Heroes of Might and Magic, or Master of Orion, or your turn-based game of choice. What is the game logic behind making the next turn? Are there any materials or books to read on the topic? To be specific, let's imagine a game loop:

        void eventsHandler(); // something that responds to input
        void gameLogic();     // something that decides what's going to be output on the screen
        void render();        // this function outputs stuff on the screen

    All of those get called, say, 60 times a second. But where does turn-based enter here? I might imagine that in gameLogic() there is a function like endTurn() that fires when a player clicks that button, but how do I handle it all? Need insights.
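
    One common shape for this, sketched in JavaScript since the question is language-agnostic (all names here are hypothetical): keep the real-time loop running every frame for input, animation and rendering, but store whose turn it is in the game state and gate the turn transition behind a flag that endTurn() sets.

        var game = {
            players: ['player1', 'player2'],
            current: 0,
            pendingEndTurn: false
        };

        function endTurn() {            // wired to the "End Turn" button
            game.pendingEndTurn = true;
        }

        function gameLogic() {
            if (game.pendingEndTurn) {
                game.pendingEndTurn = false;
                game.current = (game.current + 1) % game.players.length;
                // resolve per-turn effects here: resource income, AI moves, unit refresh...
            }
            // per-frame logic (animations, input feedback) still runs every tick
        }

    The loop stays at 60 FPS; only the turn counter advances discretely, so animations keep playing while the game waits for the next endTurn().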


  • Python2.7 / Pip2.7 install in Centos6: root does not see /usr/local/bin

    - by Erotemic
    I am trying to install Python 2.7 on CentOS 6. It's a pain, as CentOS 6 ships with Python 2.6 and yum is dependent on it. Furthermore, yum does not seem to have a python2.7 package. I ended up building it from source:

        wget https://www.python.org/ftp/python/2.7.6/Python-2.7.6.tgz
        gunzip Python-2.7.6.tgz
        tar -xvf Python-2.7.6.tar
        cd Python-2.7.6
        ./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
        make
        sudo make altinstall
        cd ~

    This installed python2.7 to /usr/local/bin and I can use it, but I cannot call it with sudo unless I specify the whole path name. To install pip I had to do:

        wget https://bootstrap.pypa.io/get-pip.py
        sudo /usr/local/bin/python2.7 get-pip.py

    Now whenever I want a package I have to call:

        sudo /usr/local/bin/pip2.7 install somepackage

    Is there a clean way to be able to run sudo pip2.7 install somepackage without having to specify the absolute path? Is a symlink into /usr/bin safe?


  • Mapping tomcat apache worker

    - by metamorpheus
    I am running an Apache2 server connected with Tomcat 5.5.

    workers.properties:

        workers.tomcat_home=/usr/share/tomcat5.5
        workers.java_home=/usr/lib/jvm/java-6-sun
        ps=/
        worker.list=worker1
        worker.worker1.port=8009
        worker.worker1.host=127.0.0.1
        worker.worker1.type=ajp13
        worker.worker1.lbfactor=1

    The JkMount is defined as follows:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel debug
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkMount /jsp-examples worker1
        JkMount /jsp-examples/* worker1
        JkMount /servlets-examples worker1
        JkMount /servlets-examples/* worker1
        JkMount /tcontainer worker1
        JkMount /tcontainer/* worker1

    If I call 127.0.0.1/servlets-examples, I get the examples displayed and executed correctly. If I call [same server as above]/tcontainer, I get the following error: "The requested resource (/tcontainer) is not available." (This is an error provided by Tomcat 5.5.) How can I define where to get the sources? I have a configuration file in /usr/share/tomcat-5.5-webapps/tcontainer.xml:

        <Context path="/tcontainer" docBase="/var/www/web96/html/tcontainer"
                 debug="0" privileged="true" allowLinking="true">
        </Context>

    What did I forget to configure, or what is wrong with my definitions? Thanks


  • How to detect collisions in AS3?

    - by Gabriel Meono
    I'm trying to make a simple game: when the ball falls into a certain block, you win. Mechanics: the ball falls through several obstacles; at the end there are two blocks. If the ball touches the left block you win, and the next level will contain more blocks and less space between them. Test the movie (click on the screen to drop the ball): http://gabrielmeono.com/downloads/Lucky_Hit_Alpha.swf These are the main variables:

        var winBox:QuickObject;   // You win
        var looseBox:QuickObject; // You lose
        var gameBall:QuickObject; // Ball dropped

    Question: How do I trigger a collision function if the ball hits the winBox? (Win message / next level.) Thanks, here is the full code:

        package {
            import flash.display.MovieClip;
            import com.actionsnippet.qbox.*;
            import flash.events.MouseEvent;

            [SWF(width = 600, height = 600, frameRate = 60)]
            public class LuckyHit extends MovieClip {

                public var sim:QuickBox2D;
                var winBox:QuickObject;
                var looseBox:QuickObject;
                var gameBall:QuickObject;

                /**
                 * Constructor
                 */
                public function LuckyHit() {
                    sim = new QuickBox2D(this);
                    //sim.createStageWalls();
                    winBox = sim.addBox({x:5, y:600/30, width:300/30, height:10/30, density:0});
                    looseBox = sim.addBox({x:15, y:600/30, width:300/30, height:10/30, density:0});

                    // make obstacles
                    for (var i:int = 0; i < (stage.stageWidth/50); i++) {
                        // End
                        sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
                        sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
                        // Mid End
                        sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
                        // Middle Start
                        sim.addCircle({x:0 + i * 1.5, y:09, radius:0.1, density:0});
                        sim.addCircle({x:1 + i * 1.5, y:08, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 1.5, y:07, radius:0.1, density:0});
                        sim.addCircle({x:1 + i * 1.5, y:06, radius:0.1, density:0});
                    }

                    sim.start();
                    stage.addEventListener(MouseEvent.CLICK, _clicked);
                }

                /**
                 * ..
                 * @param e MouseEvent.CLICK
                 */
                private function _clicked(e:MouseEvent) {
                    gameBall = sim.addCircle({x:(mouseX/30), y:(1), radius:0.25, density:5});
                }
            }
        }


  • Does anyone provide a Skype connection service?

    - by Runc
    Is there a way of offering Skype access to incoming calls while keeping all telephony traffic over our chosen business telephony provision? Is there a 3rd party who can route incoming Skype calls to our telephone system? The business has had requests from contacts wanting to call us via Skype, but we want to keep all telephony via our PBX and phone lines as our geographic location limits our available internet bandwidth. We also prevent installation of non-standard applications on desktops and do not want to add Skype to our build. I was wondering if there were any 3rd parties that provide a connection service that would allow our contacts to call via Skype and us receive the calls via our phone system.


  • Why is TDD not working here?

    - by TobiMcNamobi
    I want to write a class A that has a method calculate(<params>). That method should calculate a value using database data. So I wrote a class Test_A for unit testing (TDD). The database access is done using another class, which I have mocked with a class we'll call Accessor_Mockup. Now, the TDD cycle requires me to add a test that fails and then make the simplest change to A so that the test passes. So I add data to Accessor_Mockup and call A.calculate with appropriate parameters. But why should A use the accessor class at all? It would be simpler (!) if the class just "knew" the values it could retrieve from the database. For every test I write I could introduce such a new value (or an if-branch or whatever). But wait... TDD is more. There is the refactoring part. But that sounds to me like: "OK, I can do all this with a big if-elseif construct. I could refactor it using a new class... but instead I make use of the DB accessor and do this in a totally different way. The code will not necessarily look better afterwards, but I know I WANT to use the database."
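
    For what it's worth, a minimal sketch of the seam being described, in JavaScript (the names follow the question; doubling the fetched value is an invented stand-in for the real calculation, and the assertion style is hypothetical):

        // Production class: depends on an accessor, not on the database itself.
        function A(accessor) {
            this.accessor = accessor;
        }
        A.prototype.calculate = function (key) {
            // The knowledge lives in the database; A only transforms it.
            return this.accessor.fetch(key) * 2;
        };

        // Test double standing in for the real DB accessor.
        var accessorMockup = {
            values: { answer: 21 },
            fetch: function (key) { return this.values[key]; }
        };

        var a = new A(accessorMockup);
        console.assert(a.calculate('answer') === 42, 'calculate uses the accessor');

    The reason to go through the accessor even when hard-coded values would pass the test is that the accessor is the seam the refactoring step relies on; values baked into A would re-encode database knowledge inside the class.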


  • Multi MVC processing vs Single MVC process

    - by lordg
    I've worked fairly extensively with the MVC framework CakePHP; however, I'm finding that I would rather have my pages driven by multiple MVC triads than by just one. My reason is primarily to maintain a DRYer design. In CakePHP MVC, you call a URL which calls a single MVC, which then calls the layout. What I want is: you call a URL, it processes a layout, and the layout then calls multiple MVCs, one per component or block of HTML on the page. When you compare JavaScript components, Ajax, and server-side HTML rendering, it seems the most consistent method for building pages is through blocks of components or HTML views. That way, a view block could be situated either on the server or on the client. This is technically my ONLY disagreement with the MVC model. Outside of this, IMHO, MVC rocks! My question is: what other RAD frameworks follow the same principles as MVC but are driven by the view side of MVC? I've looked at Django and Ruby on Rails, yet they seem to be more controller-driven. Lift/Scala appears to be somewhat of a good fit, but I'm interested to see what else exists.


  • JavaScript objects and Crockford's The Good Parts

    - by Jonathan
    I've been thinking quite a bit recently about how to do OOP in JS, especially when it comes to encapsulation and inheritance. According to Crockford, the classical pattern is harmful because of new(), and both the prototypal and classical patterns are limited because their use of constructor.prototype means you can't use closures for encapsulation. Recently, I've considered the following couple of points about encapsulation:

    1. Encapsulation kills performance. It makes you add functions to EACH member object rather than to the prototype, because each object's methods have different closures (each object has different private members).

    2. Encapsulation forces the ugly "var that = this" workaround to let private helper functions access the instance they're attached to. Either that, or you make sure to call them with privateFunction.apply(this) every time.

    Are there workarounds for either of the two issues I mentioned? If not, do you still consider encapsulation to be worth it? Sidenote: the functional pattern Crockford describes doesn't even let you add public methods that only touch public members, since it completely forgoes the use of new() and constructor.prototype. Wouldn't a hybrid approach, where you use classical inheritance and new() but also call Super.apply(this, arguments) to initialize private members and privileged methods, be superior?
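
    To make the two points concrete, a small sketch of closure-based encapsulation (the names are hypothetical): getBalance must be re-created per instance because it closes over that instance's private state, and the private helper reaches the instance through the captured that.

        function makeAccount(initialBalance) {
            var balance = initialBalance;     // private member, held in the closure
            var that = {};

            // Private helper: has no useful `this` when called directly, hence `that`.
            function audit() {
                console.log('balance checked for', that.owner);
            }

            that.owner = 'anonymous';         // public member
            that.getBalance = function () {   // re-created for EVERY instance
                audit();
                return balance;
            };
            return that;
        }

        var acct = makeAccount(100);
        console.log(acct.getBalance());       // 100; `balance` is unreachable from outside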


  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggers a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of these must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with a Demo

    A demo is most straightforward. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now, we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud side.

    First, we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer, since the file name consists only of an extension, so we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore" (if the command window asks for input, just press Enter):

        echo > .gitignore

    Now open this file, copy in the content below, and save:

        node_modules

    If we switch to Git for Windows, we will find that the packages under "node_modules" are no longer in the change list. So now if we commit and push, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it, and save:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push to WAWS. If we then open the WAWS in the developer portal, we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment, we can see how WAWS handled it. In particular, we can see that WAWS executed NPM, and if we open the log we can review which command WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.

    Open the website and we can see the result, which proves that "express" was installed successfully.

    What Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped by git push. In the example above I put "node_modules" into this file in my local repository. This means: do not track or upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. A ".gitignore" file can list files and folders to ignore, and it can also list files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in JSON format, and we can also declare the dependent packages this Node.js application needs. In WAWS, name and version are mandatory. When a deployment happens, WAWS looks into this file, finds the dependent packages, and executes the NPM command to install them one by one. So in the demo above I put "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partly automatically. If we have a valid "package.json" in our local repository, then when installing a package we can specify the "--save" parameter in the "npm install" command, so that NPM will update the dependencies section for us. For example, when I want to install the "azure" package I would execute the command below (note the "--save" at the end):

        npm install azure --save

    Once it finishes, my "package.json" will be updated automatically. Each dependent package is listed there: the JSON key is the package name, while the value is the version range. Below is a brief list of the version range formats (for more information about "package.json" please refer here):

        version     Must match the version exactly. Example: "azure": "0.6.7"
        >=version   Must be equal to or greater than the version. Example: "azure": ">=0.6.0"
        1.2.x       The version must start with the supplied digits, but any digit may be used in place of the x. Example: "azure": "0.6.x"
        ~version    The version must be at least as high as the range, and less than the next major revision above it. Example: "azure": "~0.6.7"
        *           Matches any version. Example: "azure": "*"

    WAWS will install the proper version of each package based on what is defined here. That is the overall process of WAWS git deployment plus NPM installation.

    But Some Packages...

    As we know, when we specify dependencies in "package.json", WAWS downloads and installs them on the cloud. For most packages this works very well, but some special packages may not work: if the package installation needs a special environment, it can fail. For example, the SQL Server driver for Node.js ("msnodesql") needs "node-gyp", Python, and C++ 2010 installed on the target machine during NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail, since there is no "node-gyp", Python, or C++ 2010 in the WAWS virtual machine. For example, the "server.js" file:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    And it failed to deploy to WAWS; from the NPM log we can see it's because "msnodesql" cannot be installed on WAWS.

    The solution is to ignore all packages in ".gitignore" except "msnodesql", and upload that package ourselves. This can be done with the content below: we first un-ignore the "node_modules" folder, then ignore all its sub-folders (while still letting git look inside each one), and finally un-ignore the one sub-folder named "msnodesql", which is the SQL Server Node.js driver:

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the ".gitignore" syntax, please refer to this thread. Now in Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS, and the deployment completes successfully. We can then use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them during deployment. For special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload as well. The combination of Windows Azure Web Site, Node.js and NPM makes it even more easy and quick to develop and deploy our Node.js applications to the cloud.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Troubleshooting VC++ DLL in VB.Net

    - by Jolyon
    I'm trying to make a solution in Visual Studio that consists of a VC++ DLL and a VB.Net application. To figure this out, I created a VC++ Class Library project with the following code (I removed all the junk the wizard creates):

    mathfuncs.cpp:

        #include "MathFuncs.h"

        namespace MathFuncs
        {
            double MyMathFuncs::Add(double a, double b)
            {
                return a + b;
            }
        }

    mathfuncs.h:

        using namespace System;

        namespace MathFuncs
        {
            public ref class MyMathFuncs
            {
            public:
                static double Add(double a, double b);
            };
        }

    This compiles quite happily. I can then add a VC++ console project to the solution, add a reference to the original project for this new project, and call it as follows:

    test.cpp:

        using namespace System;

        int main(array<System::String ^> ^args)
        {
            double a = 7.4;
            int b = 99;
            Console::WriteLine("a + b = {0}", MathFuncs::MyMathFuncs::Add(a, b));
            return 0;
        }

    This works just fine and will build to test.exe and mathsfuncs.dll. However, I want to use a VB.Net project to call the DLL. To do this, I add a VB.Net project to the solution, make it the startup project, and add a reference to the original project. Then I attempt to use it as follows:

        MsgBox(MathFuncs.MyMathFuncs.Add(1, 2))

    However, when I run this code, it gives me an error: "Could not load file or assembly 'MathFuncsAssembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format." Do I need to expose the method somehow? I'm using Visual Studio 2008 Professional.


  • Designing application flow

    - by Umesh Awasthi
    I am creating a web application in Java where I need to model the following flow. When the user triggers a certain process (add product to cart), I need to pass through these steps:

    1. Check the HTTP session to see if the user is logged in.
    2. Check the HTTP session for the shopping cart.
    3. If the user exists in the session but the cart does not, get the user's cart from the database, add the item to the cart, save the cart in the session, and update it in the DB.
    4. If the cart does not exist in the DB, create a new cart and save it in the session.

    Though I've missed a lot of use cases here (I don't want the question to get too long), most of the flow will be the same as described above. My flow will start at the controller, go to the service layer, and end up in the DAO layer. Since there will be a lot of use cases where I need to check the HTTP session and, based on that, call the service layer, I was planning to add a facade layer responsible for doing this: checking the session and interacting with the service layer. A sketch of this shape follows below. Please suggest whether this is a valid approach or whether a better approach could be used here. One more point where I am confused: how do I handle the HTTP session in the facade layer? Do I need to pass the HTTP session object each time I call my facade, or can another approach be used here?
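
    Purely as an illustration of the shape of such a facade (sketched in JavaScript to stay consistent with the other snippets on this page, with invented names; the question itself is Java): the facade is the only layer that inspects the session, and the session object, or just the values it holds, is passed into each call so the facade itself stays stateless and testable.

        // Hypothetical service-layer stand-in (would delegate to the DAO layer).
        var cartService = {
            carts: {},
            loadCart: function (userId) { return this.carts[userId] || null; },
            createCart: function (userId) {
                var cart = { userId: userId, items: [] };
                this.carts[userId] = cart;
                return cart;
            },
            saveCart: function (cart) { this.carts[cart.userId] = cart; }
        };

        // Facade: the only layer that inspects the session.
        var cartFacade = {
            addToCart: function (session, productId) {
                if (!session.user) {
                    throw new Error('not logged in');
                }
                var cart = session.cart;
                if (!cart) {
                    cart = cartService.loadCart(session.user.id) ||
                           cartService.createCart(session.user.id);
                    session.cart = cart;          // cache it in the session
                }
                cart.items.push(productId);
                cartService.saveCart(cart);       // keep the DB in sync
                return cart;
            }
        };

        var session = { user: { id: 'u1' } };
        cartFacade.addToCart(session, 'p42');
        console.log(session.cart.items);          // ['p42']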


  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid argument" under OS X. Can anyone suggest why this fails in OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid-0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.

