Search Results

Search found 19480 results on 780 pages for 'do your own homework'.

  • Wondering where to begin

    - by Cat
    Hello all. After being interested for years and years (and years), I have finally decided to start learning how to create software and web applications. Based on recommendations, I have started with learning the basics of web design first (which I am almost done with) and then will move on to the meat of my process: learning the languages. Problem is, I don't know where to start :/ PHP, Ruby, Perl... and where would SQL, JavaScript and .NET fit into the mix? I am assuming they build on each other/play off of each other somewhat, so following some sort of 'order' will make the process more logical and digestible.

    You're probably thinking, "Just go to school for computer engineering, duh!" But I already have a degree and don't plan on going back to school. I believe I have an adequate aptitude for this sort of thing, and although it will be challenging, with the support of the community I know I can do it on my own. Thanks in advance, everyone, and I am very sorry for the length. I look forward to hearing what you all have to say. Warm Regards, Cat

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers: a linq-to-sql data access part and core functionality in its own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will use the same core assembly with business rules and the linq-to-sql data access layer as the web service. Some concepts I've thought about are:

    - ASP.NET MVC
    - WebForms
    - AJAX controls, possibly letting the AJAX controls access the existing web service through JSON

    Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.

  • Setting processor affinity on CSC.exe launched by CoreCompile MSBuild Task

    - by Hardy
    I am wondering if there is a simple way to ensure that when a C# project is compiled, the CSC.exe launched inherits the parent's processor affinity settings, or perhaps a way by which I can supply this. I have been trying to accomplish this by launching a bat file from a VS.NET command prompt like:

        start /affinity 01 custombuild.cmd

    and inside my custombuild.cmd I have:

        @echo off
        msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1
        :END

    The command-line call to Csc.exe this generates looks like the following (ignoring the rest for brevity):

        C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    What I'd like is for CSC.exe to inherit the processor affinity, or a simple way to override how the csc.exe call is generated so I can turn it into:

        start /affinity 01 C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    I also noticed that the CoreCompile target is defined in Microsoft.CSharp.targets; should I be considering overriding the MSBuildToolsPath variable so I can sneak in my own version? This feels rather hacky. Any help would be much appreciated.
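
    One possibility, sketched only — I have not verified that the Csc task will happily execute a .cmd wrapper, though the CscToolPath/CscToolExe property names do appear in the stock Microsoft.CSharp.targets — is to point the compiler at a wrapper script that applies the affinity itself:

        rem cscwrap.cmd -- hypothetical wrapper that pins the real compiler to CPU 0.
        rem Assumes "start /affinity" is available (Windows 7 / Server 2008 R2+).
        @echo off
        start "" /b /wait /affinity 1 "C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe" %*
        exit /b %ERRORLEVEL%

    Then build with:

        msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1 ^
            /p:CscToolPath=C:\build\tools /p:CscToolExe=cscwrap.cmd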

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of the node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on inserts and maintain it in its own column, or generate this path on query to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

    1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity
    2. INSTEAD OF INSERT TRIGGER - does not require the insert to have a NULL path passed, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY()
    3. STORED PROCEDURE - requires all inserts into this table to be done through the stored procedure implementing the trigger logic
    4. VIEW - requires building the path in the view

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above, but requires the view to perform the path logic and requires use of the view if a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
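
    For reference, option 4 can be written in SQL Server 2005 with a recursive common table expression — a sketch, with the table name assumed to be dbo.tree:

        CREATE VIEW dbo.tree_with_path
        AS
        WITH tree_cte ([id], [parent_id], [path]) AS
        (
            -- anchor: root nodes start the path with their own id
            SELECT [id], [parent_id], CAST([id] AS varchar(900))
            FROM dbo.tree
            WHERE [parent_id] IS NULL

            UNION ALL

            -- recursive step: append each child's id to its parent's path
            SELECT t.[id], t.[parent_id],
                   CAST(c.[path] + '-' + CAST(t.[id] AS varchar(12)) AS varchar(900))
            FROM dbo.tree AS t
            JOIN tree_cte AS c ON c.[id] = t.[parent_id]
        )
        SELECT [id], [parent_id], [path]
        FROM tree_cte;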

  • prevent javascript in the WMD editor's preview box

    - by Justin Grant
    There are many SO questions (e.g. here and here) about how to do server-side scrubbing of Markdown produced by the WMD editor to ensure the HTML generated doesn't contain malicious script, like this:

        <img onload="alert('haha');" src="http://www.google.com/intl/en_ALL/images/srpr/logo1w.png" />

    Unfortunately, this still allows script to show up in the WMD client's preview box. I doubt this is a big deal, since if you're scrubbing the HTML on the server an attacker can't save the bad HTML, so no one else will be able to see it later and have their cookies stolen or sessions hijacked by the bad script. But it's still kinda odd to allow an attacker to run any script in the context of your site, and it's probably a bad idea to allow the client preview window to allow different HTML than your server will allow. StackOverflow has clearly plugged this hole. How did they do it? [NOTE: I already figured this out, but it required some tricky JavaScript debugging, so I'm answering my own question here to help others who may want to do the same thing]
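
    For what it's worth, one common shape for this kind of fix (a sketch only — not necessarily the exact code StackOverflow shipped) is to run the converter's HTML output through a tag whitelist before it reaches the preview pane, with per-tag attribute rules so things like onload can't ride along:

        // Hypothetical names; tags not matching a whitelist pattern are
        // escaped rather than rendered in the preview.
        var basicTag  = /^<\/?(a|b|blockquote|code|del|em|i|li|ol|p|pre|strong|sub|sup|ul)>$/i;
        var anchorTag = /^<a\s+href="[^"<>]+"(\s+title="[^"<>]*")?\s*>$/i;
        var imageTag  = /^<img\s+src="[^"<>]+"(\s+(alt|title|width|height)="[^"<>]*")*\s*\/?>$/i;

        function sanitizeHtml(html) {
            return html.replace(/<[^<>]*>?/gi, function (tag) {
                if (basicTag.test(tag) || anchorTag.test(tag) || imageTag.test(tag)) {
                    return tag;
                }
                return tag.replace(/</g, "&lt;").replace(/>/g, "&gt;");
            });
        }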

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In Visual Studio 2008 you get an "uninitialized local variable used" warning at compile time and a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is "don't do that", but nobody's perfect. For example:

        struct Test
        {
            float GetMember() const { return member; }
            float member;
        };

        Test test;
        float f1 = test.member;      // Raises warning, asserts in VS debugger at runtime
        float f2 = test.GetMember(); // No problem, just keeps on going

    This surprised me, but it makes some sense — the compiler can't assume calling a function on an uninitialized struct is an error, or how else would you initialize or construct it? And anything fancier just quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are OK to call and when, especially just as a debugging help. I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs. Still, it would seem that within the context of the function call, wouldn't it know inside GetMember() that member wasn't initialized yet? I'm assuming it's not only relying on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or more tied to how C++ works?
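
    For completeness, here is one do-it-yourself pattern along the "assertions within the class" line mentioned above (my own sketch, not something the compiler provides): poison the member in the constructor and assert before handing it out.

        #include <cassert>
        #include <limits>

        struct Test
        {
            // Debug-friendly variant: start from a NaN "poison" value
            Test() : member(std::numeric_limits<float>::quiet_NaN()) {}

            float GetMember() const
            {
                // NaN is the only float that compares unequal to itself,
                // so this fires if member was never assigned a real value
                assert(member == member && "member read before being set");
                return member;
            }

            float member;
        };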

  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone, over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am grabbing from: MySQL and MS SQL Express. I am retrieving an employee id from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program is only making it through 1200 before crashing with an error like the following:

        Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC)
        Class (PDOException)
        SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

    It seems like the program is not able to connect to the data source, but it is running the exact same query about 30 times before this and having no problem. Also, I have thoroughly checked all of the data coming into the query and it all looks fine. I believe the issue may be that too many connections are being created; I have tried to close all connections in many different places, and nothing seems to be fixing the problem. Any debugging help or suggestions would be appreciated! Craig Metrolis
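
    If the wrapper really is constructing a fresh PDO object per query, one hedged suggestion (class and option names below are illustrative, not from the original code) is to cache one connection per DSN and reuse it, so 11k look-ups don't open 11k ODBC connections:

        <?php
        class ConnectionCache
        {
            private static $connections = array();

            public static function get($dsn, $user, $pass)
            {
                // Reuse an existing handle for this DSN instead of reconnecting
                if (!isset(self::$connections[$dsn])) {
                    self::$connections[$dsn] = new PDO($dsn, $user, $pass, array(
                        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
                    ));
                }
                return self::$connections[$dsn];
            }
        }
        ?>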

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of:

    a) the String.Compare overload that accepts the culture and options
    b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData

    This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload accepting an IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only supports the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth", etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same using my other comparison options, and I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hashcode from the SortKey.KeyData and checks equality using the String.Compare overload. Any suggestions?
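
    A sketch of that last idea (the FNV-style hash over the sort key bytes is my own choice; any stable hash of KeyData would do):

        using System.Collections.Generic;
        using System.Globalization;

        public sealed class SortKeyComparer : IEqualityComparer<string>
        {
            private readonly CompareInfo compareInfo;
            private readonly CompareOptions options;

            public SortKeyComparer(CultureInfo culture, CompareOptions options)
            {
                this.compareInfo = culture.CompareInfo;
                this.options = options;
            }

            public bool Equals(string x, string y)
            {
                return compareInfo.Compare(x, y, options) == 0;
            }

            public int GetHashCode(string s)
            {
                // Hash the culture-sensitive sort key, not the raw string, so
                // strings that compare equal under these options hash equally
                byte[] keyData = compareInfo.GetSortKey(s, options).KeyData;
                unchecked
                {
                    int hash = (int)2166136261;
                    foreach (byte b in keyData)
                        hash = (hash ^ b) * 16777619;
                    return hash;
                }
            }
        }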

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector? I have written a custom dissector for the CQS feed (reference page). One of our servers gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:

    - cqs.frame_gaps - the number of gaps within a UDP frame (always zero)
    - cqs.frame_first_seq - the first sequence number in a UDP frame
    - cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
    - cqs.frame_msg_count - the number of messages in this UDP frame

    And I am displaying each of these values in custom columns. I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static) and flags gaps when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work, because the dissector can be called in random-access order depending on where you click in the GUI. So you could process frame 10, then frame 15, then frame 11, etc. Is there any way for my dissector to know if the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com)
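
    For reference, the usual pattern — sketched below against the 1.x-era API; function and field names changed in later Wireshark releases, and msg_count_in_frame() is a placeholder for your own message counting — is to do the bookkeeping only on the first, in-order pass and attach the verdict to each frame, so random-access redisplay just reads the stored result:

        static int proto_cqs = -1;          /* registered via proto_register_protocol() */
        static guint32 expected_seq = 0;    /* running expectation, first pass only */

        static void dissect_cqs(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
        {
            guint32   first_seq = tvb_get_ntohl(tvb, 0);   /* assumed offset */
            gboolean *gap;

            if (!pinfo->fd->flags.visited) {
                /* first, sequential pass: compare against the running counter */
                gap  = se_alloc(sizeof(gboolean));
                *gap = (expected_seq != 0 && first_seq != expected_seq);
                expected_seq = first_seq + msg_count_in_frame(tvb);
                p_add_proto_data(pinfo->fd, proto_cqs, gap);
            }

            /* later (random-access) calls just read the stored verdict */
            gap = (gboolean *)p_get_proto_data(pinfo->fd, proto_cqs);
            /* ... populate cqs.frame_gaps in the tree from *gap ... */
        }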

  • Getting a lightweight installation of java eclipse.

    - by liam
    Having dealt with yet another stupid Eclipse problem, I want to try to get the lightest, most minimal Eclipse installation possible. To be clear, I use Eclipse for two things:

    - editing Java
    - debugging Java

    Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc.). I have not found any aspect of working in Eclipse on these tasks to be efficient or even reliable, so I do not want plug-ins that relate to them. From the eclipse.org site, this is the lightest install of Eclipse that they have, and I don't want any of those things (Bugzilla, Mylyn, CVS, xml_ui); I have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will:

    1) ignore svn metadata
    2) include the full-featured editor (intellisense and type-finding)
    3) include the full-featured debugger (standard eclipse/jdk)

    and does not have any extra plug-ins, platforms, or "integrations" with other platforms? Specifically, I don't want to deal with plug-ins relating to: Maven, JSP validation, JavaScript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled Tomcat/GlassFish/etc., J2EE tools, or anything of the like. I do primarily Spring/Hibernate/web-MVC apps, and have never dealt with an Eclipse plug-in that handles any of it gracefully; I can work effectively with my own toolset, but Eclipse extensions do nothing but get in the way. I have worked with plain Eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of the Spring SourceTools, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to NetBeans/IntelliJ is not an option, and my teammates work with svn-controlled .class/.project files, so it pretty much has to be Eclipse. Does anyone have any good advice on how I can save a few grey hairs?

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.
    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
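
    For context, the conditional-GET side of the second technique looks roughly like this in a Rails (2.2+) controller — a sketch with assumed model names; this is what lets the proxy get a cheap 304 instead of a full render:

        class ArticlesController < ApplicationController
          def show
            @article = Article.find(params[:id])
            # Renders only when the cached copy is stale; otherwise Rails
            # replies 304 Not Modified and skips the render entirely
            if stale?(:etag => @article, :last_modified => @article.updated_at.utc)
              respond_to do |format|
                format.html
              end
            end
          end
        end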

  • pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think that the restrictions of the App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine? Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    - A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml etc. This makes deployment on the App Engine rather difficult, because the apps are not visible to each other, so all portlet-containing archives need to be included in the war file of the deployed "App Engine based portal server".
    - The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xmls and web.xmls, which are located in the same spot for every portlet archive that is loaded. I think this may be problematic in the App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xmls from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone here who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto or any other portlet container to be able to run it on the App Engine?

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module, and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many things I normally consider faults in programming. The problem with the second approach seems to be the countless mini-queries these access functions cause: performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things, or for some reason modified the schema? It means some other modules would break or malfunction until the change was integrated into them. The usual ripple-effect problem.
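
    One middle road, sketched here with made-up table and column names: the Products module could expose a bulk accessor alongside get_product_info(int), so a 100-row page costs one extra query rather than 100, while ContactVendor still never touches the Product tables directly:

        -- get_product_titles(list-of-ids), implemented inside the Products
        -- module; ids below stand in for those gathered from the conversations
        SELECT product_id, title
        FROM product_i18n
        WHERE product_id IN (17, 42, 99)
          AND language_code = 'en';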

  • Problems compiling C++ code using Cygwin

    - by user343403
    I am trying to compile some source code in Cygwin (on Windows 7) and get the following error when I run the makefile:

        g++ -DHAVE_CONFIG_H -I. -I.. -I.. -Wall -Wextra -Werror -g -O2 -MT libcommon_a-Fcntl.o -MD -MP -MF .deps/libcommon_a-Fcntl.Tpo -c -o libcommon_a-Fcntl.o `test -f 'Fcntl.cpp' || echo './'`Fcntl.cpp
        Fcntl.cpp: In function 'int setCloexec(int)':
        Fcntl.cpp:8: error: 'F_GETFD' was not declared in this scope
        Fcntl.cpp:8: error: 'fcntl' was not declared in this scope
        Fcntl.cpp:11: error: 'FD_CLOEXEC' was not declared in this scope
        Fcntl.cpp:12: error: 'F_SETFD' was not declared in this scope
        make[4]: *** [libcommon_a-Fcntl.o] Error 1
        make[4]: Leaving directory `/abyss-1.1.2/Common'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/abyss-1.1.2'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/abyss-1.1.2'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/cygdrive/c/Users/Martin/Documents/NetBeansProjects/abyss-1.1.2_1'
        make: *** [.build-impl] Error 2

    The problem file is:

        #include "Fcntl.h"
        #include <fcntl.h>

        /* Set the FD_CLOEXEC flag of the specified file descriptor. */
        int setCloexec(int fd)
        {
            int flags = fcntl(fd, F_GETFD, 0);
            if (flags == -1)
                return -1;
            flags |= FD_CLOEXEC;
            return fcntl(fd, F_SETFD, flags);
        }

    I don't understand what is going on. The file fcntl.h is available, and the identifiers that it says were not declared in this scope do not give an error when I compile the file on its own. Any help would be much appreciated. Many thanks.

  • Yet another "What is this code doing"-type of Perl code

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I never saw a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them.

    === NOTES ===
    The OSA Framework is a framework of our own. The functions are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.
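
    Rewritten plainly — a sketch that assumes the intent really is "pick two distinct files at random"; the OSA::* calls are kept as in the original — the same thing could read:

        use List::Util qw(shuffle);

        # take the first two indices of a shuffled index list => two distinct picks
        my @picked = (shuffle(0 .. $#heads))[0, 1];

        for my $i (0 .. $#picked) {
            my $head_nr = sprintf "%04d", $i;
            my $name    = $heads[$picked[$i]];
            OSA::Fast::IO::Cp->cp($SysKey, '', "osr/tiparlo/$name.txt",   "$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey, '', "osr/tiparlo/$name.cache", "$recdir/heads/$head_nr.cache");
        }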

  • Firefox leading dot in cookie issue

    - by Jon
    Hi all, we are having an annoying issue with Firefox and cookies. We have the following domains:

    - sub1.mydomain.com
    - sub2.mydomain.com
    - sub3.mydomain.com
    - otherdomain.com

    We are converting our framework to be multilingual, providing a drop-down to change the language at any point on the site. The code base is shared across all the domains above. We cannot set one cookie across all "mydomain.com" sites; each of the sub-domains has to have its own. To get this to work, we set a JavaScript cookie when the user chooses a new language. When the page posts back to the server, the code picks this up and sets the user's preference to that new language code (this is all C# and ASP.NET). We have to set the host to be "subX.mydomain.com" and the path to be "/" in the cookie, so that it is just for the sub-domain and all parts of that domain. This works great in all browsers apart from Firefox. It seems that Firefox will prepend a dot to the beginning of the domain, so ".subX.mydomain.com". When the code posts back with Firefox, the cookie is always null. Has anyone had this situation? (I imagine it is not all that uncommon.) I have read a lot of people saying "remove the domain from the cookie", but that cannot work for us, as we have multiple sub-domains that need their own cookie values. Thanks
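
    For what it's worth, a sketch of the JavaScript side (function and cookie names invented): per the cookie specs, an explicit Domain attribute is treated as a domain cookie, which is where the leading-dot representation comes from, while omitting the attribute entirely yields a host-only cookie scoped to exactly the current sub-domain — which may in fact give each subX.mydomain.com its own value:

        function setLanguageCookie(code) {
            // no "domain=" attribute => host-only cookie for subX.mydomain.com
            document.cookie = "lang=" + encodeURIComponent(code) + "; path=/";
        }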

  • Stubbing an ActsAs Rails Plugin

    - by Rabbott
    I need to create a plugin much like Authlogic (or even just add on to Authlogic), but due to requirements beyond my control I need my plugin to authenticate using SOAP. Basically the plugin would require that anyone accessing the controller (a before_filter would be fine) has to authenticate first. I have ZERO control over the login page or the SOAP server; I am simply a client attempting to authenticate to the provider's SOAP web service. Here is what happens: the before_filter realizes that no session[:credential] is set, and forwards the user to the URL on the provider's servers. The user enters their credentials, and once authenticated, the web service forwards the user to a URL that has been entered by their sysadmins, attaching a token to the URL on its way back. I need to take that token, append it to some parameters stored in a local YAML file, and make the SOAP call to the provider's server. If all goes as planned, I set session[:credential] to the result of the SOAP call and forward the user to the root page. Subsequent calls to the before_filter will not make the SOAP call, because session[:credential] is set. Ideally I think this would be awesome to slap on top of Authlogic, but I'm not sure how to do this. So I started to create my own acts_as_soap_authentic plugin, which isn't causing errors, but doesn't do anything. Anyone have any pointers or tips as to how I can get the ball rolling here? It seems simple, but is proving not to be.
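
    A rough sketch of that flow as a filter — SoapAuthenticator, PROVIDER_LOGIN_URL and the token parameter name are placeholders for the provider's real details:

        class ApplicationController < ActionController::Base
          before_filter :require_soap_authentication

          private

          def require_soap_authentication
            return if session[:credential]
            if params[:token]
              # token came back from the provider's login page; verify it
              # via the SOAP call and cache the result in the session
              session[:credential] = SoapAuthenticator.verify(params[:token])
              redirect_to root_path
            else
              redirect_to PROVIDER_LOGIN_URL
            end
          end
        end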

  • "string" != "string"

    - by Misiur
    Hi. I'm doing a templates system of my own. I want to change

        <title>{site('title')}</title>

    into a call of the function "site" with the parameter "title". Here's the replace function:

        private function replaceFunc($subject)
        {
            foreach ($this->func as $t) {
                $args = explode(", ", preg_replace('/\{'.$t.'\(\'([a-zA-Z,]+)\'\)\}/', '$1', $subject));
                $subject = preg_replace('/\{'.$t.'\([a-zA-Z,\']+\)\}/', call_user_func_array($t, $args), $subject);
            }
            return $subject;
        }

    Here's site:

        function site($what)
        {
            global $db;
            $s = $db->askSingle("SELECT * FROM ".DB_PREFIX."config");
            switch ($what) {
                case 'title':
                    return 'Title of page';
                case 'version':
                    return $s->version;
                case 'themeDir':
                    return 'lolmao';
                default:
                    return false;
            }
        }

    I've tried to compare $what (which in this case is "title") with "title". The MD5s are different, strcmp gives -1, and "==" and "===" return false. What is wrong? ($what's type is string. You can't change call_user_func_array into call_user_func, because later I'll be using multiple arguments.)
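
    A likely culprit, noted here for anyone landing on this: preg_replace() returns the whole subject with the match substituted, so $args ends up holding the entire surrounding template text (tags and all), not just the captured argument — hence every comparison with "title" fails. preg_match() extracts only the capture:

        // inside the foreach, instead of building $args with preg_replace():
        if (preg_match('/\{'.$t.'\(\'([a-zA-Z,]+)\'\)\}/', $subject, $m)) {
            $args = explode(",", $m[1]);   // $m[1] is exactly "title"
        }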

  • ASP.NET Repeater, Dynamically starting new table row

    - by dherrin79
    I have the following repeater:

        <table>
            <asp:Repeater runat="server" ID="rptBrandRepeater">
                <ItemTemplate>
                    <tr>
                        <td>
                            <asp:HyperLink runat="server" ID="lnkCompanyLink">
                                <asp:Image runat="server" ID="imgCompanyLogo" />
                            </asp:HyperLink>
                        </td>
                    </tr>
                </ItemTemplate>
            </asp:Repeater>
        </table>

    I want to start a new row every four table cells. I don't want to use jQuery or JavaScript to accomplish this. The outputted HTML is supposed to look like this page: http://rmtequipment.com/golfandturf.aspx I have made an interface that will allow them to add these logos on their own, so this page will be built dynamically. What is the best way to accomplish this goal? If a ListView or GridView is a better approach, I am open to that as well. Thanks in advance.
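
    One server-side-only sketch (IDs as above; the modulo expression is the only addition) emits the closing/opening row tags from inside the template every fourth item:

        <table>
            <tr>
            <asp:Repeater runat="server" ID="rptBrandRepeater">
                <ItemTemplate>
                    <%# (Container.ItemIndex > 0 && Container.ItemIndex % 4 == 0)
                            ? "</tr><tr>" : "" %>
                    <td>
                        <asp:HyperLink runat="server" ID="lnkCompanyLink">
                            <asp:Image runat="server" ID="imgCompanyLogo" />
                        </asp:HyperLink>
                    </td>
                </ItemTemplate>
            </asp:Repeater>
            </tr>
        </table>

    An asp:ListView with GroupItemCount="4" achieves much the same in .NET 3.5, if switching controls is acceptable.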

  • When to use certain optimizations such as -fwhole-program and -fprofile-generate with several shared libraries

    - by James
    Probably a simple answer; I get quite confused with the language used in the GCC documentation for some of these flags! Anyway, I have three libraries and a programme which uses all three. I compile each of my libraries separately, each with its own (potentially different) set of warning flags; however, I compile all three libraries with the same set of optimisation flags. I then compile my main programme, linking in these three libraries, with its own set of warning flags and the same optimisation flags used during the libraries' compilation.

    1) Do I have to compile the libraries with optimisation flags present, or can I just use these flags when compiling the final programme and linking to the libraries? If the latter, will it then optimise all or just some (presumably that which is called) of the code in these libraries?

    2) I would like to use -fwhole-program -flto -fuse-linker-plugin and the linker plugin gold. At which stage do I compile with these on — just the final compilation, or do these flags need to be present during the compilation of the libraries?

    3) Pretty much the same as 2), but with -fprofile-generate, -fprofile-arcs and -fprofile-use. I understand one first runs a programme with generate, and then with use. However, do I have to compile each of the libraries with generate/use etc., or just the final programme? And if it is just the last programme, when I then compile with -fprofile-use will it also optimise the libraries' functionality?

    Many thanks, James
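
    My understanding of the LTO case, sketched as commands (paths invented; worth verifying against the manual for your GCC version): -flto has to be present when each library's objects are compiled, because that is what embeds the intermediate representation the link-time optimiser consumes; -fwhole-program and -fuse-linker-plugin matter at the final link.

        # compile each library with LTO bytecode embedded
        gcc -c -O2 -flto lib1.c -o lib1.o
        ar rcs liblib1.a lib1.o
        # ... same for lib2, lib3 ...

        # final build: link-time optimisation across programme + libraries
        gcc -O2 -flto -fwhole-program -fuse-linker-plugin \
            main.c liblib1.a liblib2.a liblib3.a -o prog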

  • How to communicate between frames?

    - by bangoker
    I'm maintaining an application that goes sort of like this: there is a Page A with a frame that shows Page B. Page B is part of a completely different product, so there's a frame in A that just calls B. Now, they want that when an option in B is clicked, the WHOLE page is redirected to another page in A. The problem is that the URL of A is something like "www.client.MyCompany/Order/Details/123", but B knows nothing about A, or which order # it is or anything; Page A, which hosts frame B, does know it. For now my solution is to just redirect to all the orders, something like client.MyCompany/Orders, but since B doesn't know which client it is, I'll add it in the web.config (so each client has its own web.config with a different value). I don't find this solution optimal, but I can't think of anything else! I already tried putting the needed URL in page A in a hidden div (since A does know all the info) and then trying to read the whole DOM of the page from B to find it... unfortunately I can only get access to frame B's DOM (I tried with jQuery). I know frames are evil, but this is how it is written... any ideas? Thanks!
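
    One thing worth checking, sketched below with an invented handler name: a frame is allowed to navigate the top-level window even cross-origin — it is reading the other frame's DOM that the same-origin policy blocks, which is why the hidden-div trick failed. So if A can pass the target URL into B (e.g. as a query-string parameter on the frame's src), B can do the whole-page redirect itself:

        // inside page B
        function onOptionClicked(orderUrl) {
            // navigating the top window is permitted; reading its DOM is not
            window.top.location.href = orderUrl;
        }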

  • Go for Zend framework or Django for a modular web application?

    - by dr. squid
    I am using both the Zend Framework and Django, and they both have their strengths and weaknesses, but they are both good frameworks in their own way. I want to create a highly modular web application, like this example:

    modules:
    - admin
    - cms
    - articles
    - sections
    - ...

    I also want all modules to be self-contained, with all config and template files. I have been looking into a way to solve this in Zend over the last few days, but adding one more level to the module setup doesn't feel right. I am sure this could be done, but should I? I have also included Doctrine in my Zend application, which could give me even more problems with my module setup! With Django this is easy to implement (easy as in concept, not in implementation time or whatever) and a great way to create web apps. But one of the downsides of Django is the web hosting part: there are some web hosts offering Django support, but not that many. So then I guess the question is what has the most value: rapid modular development versus hosting options! Well, comments are welcome! Thanks
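
    For the Zend side, a self-contained module layout along ZF 1.x conventions might look like this (a sketch; directory and file names assumed):

        application/
            modules/
                admin/
                    controllers/
                    models/
                    views/scripts/
                    configs/module.ini
                cms/
                    controllers/
                    models/
                    views/scripts/
                    configs/module.ini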

  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha with regard to Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book, in the section to do with merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repo version wasn't recent enough), but is this SVN feature particularly different to what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this:

    1. Branch from the development trunk to do your own thing.
    2. Regularly merge changes from trunk into your branch to stay up to date.
    3. Merge back when you're done, with the mergeinfo to smooth the process.

    Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken on the way. So each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is:

    - Does merging using Mercurial provide a significant advantage when compared with mergeinfo in SVN, or is this just a lot of hot air about nothing?
    - Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
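
    For concreteness, the mergeinfo-tracked workflow looks roughly like this on the command line (Subversion 1.5+; repository URLs assumed):

        # 1. branch
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/feature -m "create branch"

        # 2. periodically sync the branch from trunk; svn:mergeinfo records
        #    which revisions have already been merged, so repeats are safe
        svn merge http://svn.example.com/repo/trunk     # run in the branch working copy

        # 3. when done, reintegrate from a clean trunk working copy
        svn merge --reintegrate http://svn.example.com/repo/branches/feature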

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

        Symbol - char(6)         // primary
        Date   - date            // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be asking requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. But if the user asks for volume gain in 10 days over 20 days, that would require another column; the total of such columns could easily cross 100, especially if I start storing other results the same way, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (ofc!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
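
    To make the trade-off concrete, the "10-day volume over a 100-day baseline" figure can be computed on the fly with something like this (MySQL; the @asof variable, table name and cut-offs are assumptions):

        SELECT Symbol,
               AVG(CASE WHEN Date >  DATE_SUB(@asof, INTERVAL 10 DAY)
                        THEN Volume END) /
               AVG(CASE WHEN Date <= DATE_SUB(@asof, INTERVAL 10 DAY)
                        THEN Volume END) AS volume_ratio
        FROM eod
        WHERE Date > DATE_SUB(@asof, INTERVAL 110 DAY)
          AND Date <= @asof
        GROUP BY Symbol
        ORDER BY volume_ratio DESC
        LIMIT 20;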

  • NSTask executed only once

    - by Eimantas
    I'm having trouble executing different NSTasks: same launchPath, different arguments. I have a class whose instances administer their own NSTask objects, and depending on the arguments those instances were initialized with, a dependent NSTask object is created. I have two initializers:

        // Method for finished task
        - (void)taskFinished:(NSNotification *)aNotification
        {
            [myTask release];
            myTask = nil;
            [self createTask];
        }

        // Designated initializer
        - (id)init
        {
            self = [super init];
            if (self != nil) {
                [[NSNotificationCenter defaultCenter] addObserver:self
                                                         selector:@selector(taskFinished:)
                                                             name:NSTaskDidTerminateNotification
                                                           object:nil];
                [self createTask];
            }
            return self;
        }

        // Convenience initializer
        - (id)initWithCommand:(NSString *)subCommand
        {
            self = [self init];
            if (self) {
                [self setCommand:subCommand];
            }
            return self;
        }

    And here's the createTask method:

        - (void)createTask
        {
            // myTask is a property defined as NSTask*
            myTask = [[NSTask alloc] init];
            [myTask setLaunchPath:@"/usr/bin/executable"];
        }

    Say I have 3 buttons. Each one creates a different class instance with a different NSTask object. But the problem is that only the first one gets executed; the second ones do not even trigger the "click" event (via target-action). I think it could be caused by the launchPath I'm trying to use, because a simple /bin/ls works fine. The same command in Terminal has a 0 return value (i.e. all is fine). Any guides or gotchas are much appreciated.
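
    One hedged observation about the code as shown: registering the observer with object:nil means every instance receives every NSTaskDidTerminateNotification, so one instance's taskFinished: can release and reset a sibling's task. Scoping the observer to the task the instance owns avoids that — re-registering inside createTask, since each new NSTask is a new object (a sketch):

        - (void)createTask
        {
            myTask = [[NSTask alloc] init];
            [myTask setLaunchPath:@"/usr/bin/executable"];

            // observe only the task this instance owns
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(taskFinished:)
                                                         name:NSTaskDidTerminateNotification
                                                       object:myTask];
        }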
