Search Results

Search found 8705 results on 349 pages for 'perl scripts'.

Page 317/349

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session IDs are sent to every page on the domain (I don't know if there is a way to make cookies behave differently), and session variables end up visible on every page of that domain. I'm trying to implement a custom session manager to overcome this behaviour, but I'm not sure if I'm thinking about it the right way. I want to bypass the PHP session system completely and use a global object which stores the session data and saves it to the database at the end of the script:

    1. On first access, generate a unique session_id and create a cookie.
    2. At the end of the script, save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT.
    3. On every access, check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable if the session has not expired. If the session_id has expired, delete it from the database.

    The session variable would be implemented as a singleton (I know I get tight coupling with this class, but I don't know a better solution). The benefits I'm trying to get are: session variables invisible to other scripts on the same server and same domain, custom management of session expiration, and a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there a better way? Thank you!
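
    A language-agnostic sketch of that flow, written in Python rather than PHP purely for brevity; the in-memory dict stands in for the database table, and every name here is a placeholder rather than part of any existing API:

        import os
        import time
        import hashlib

        SESSION_LIFETIME = 3600   # hypothetical expiry window, in seconds
        _store = {}               # in-memory stand-in for the sessions table

        class CustomSession:
            """Sketch: validate the cookie id, restore or start a session, save at the end."""

            def __init__(self, cookie_id, remote_addr, user_agent):
                self.remote_addr = remote_addr
                self.user_agent = user_agent
                row = _store.get(cookie_id)
                fresh = row is not None and time.time() - row["last_access"] < SESSION_LIFETIME
                matches = row is not None and (row["remote_addr"], row["user_agent"]) == (remote_addr, user_agent)
                if fresh and matches:
                    self.session_id, self.data = cookie_id, row["data"]   # restore stored variables
                else:
                    if row is not None:
                        del _store[cookie_id]                             # expired or mismatched: delete it
                    self.session_id = hashlib.sha256(os.urandom(32)).hexdigest()  # new id for the cookie
                    self.data = {}

            def save(self):
                # called once at the end of the request, as the question proposes
                _store[self.session_id] = {
                    "data": self.data,
                    "remote_addr": self.remote_addr,
                    "user_agent": self.user_agent,
                    "last_access": time.time(),
                }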

    Read the article

  • Checking whether images loaded after page load

    - by johkar
    Determining whether an image has loaded reliably seems to be one of the great JavaScript mysteries. I have tried various scripts/script libraries which check the onload and onerror events, but I have had mixed and unreliable results. Can I reliably just check the complete property (IE 6-8 and Firefox) as I have done in the script below? I simply have a page which lists out servers, and I link to an on.gif for each server. If it doesn't load, I just want to load an off.gif instead. This is just for internal use; I just need it to be reliable in showing the status!

        <script type="text/javascript">
        var allimgs = document.getElementsByTagName('img');

        function checkImages() {
            for (i = 0; i < allimgs.length; i++) {
                var result = Math.random();
                allimgs[i].src = allimgs[i].src + '?' + result;
            }
            serverDown();
            setInterval('serverDown()', 5000);
        }
        window.onload = checkImages;

        function serverDown() {
            for (i = 0; i < allimgs.length; i++) {
                var imgholder = new Image();
                imgholder.src = allimgs[i].src;
                if (!allimgs[i].complete) {
                    allimgs[i].src = 'off.gif';
                }
            }
        }
        </script>

    Read the article

  • "One or more breakpoints cannot be set and have been disabled. Execution will stop at the beginning of the program."

    - by sam
    I set a breakpoint in my code in Visual C++, but when I run, I see the error mentioned in the title. I know this question has been asked before on Stack Overflow (http://stackoverflow.com/questions/657470/breakpoints-cannot-be-set-and-have-been-disabled-problem), but none of the answers there fully explained the problem I'm seeing. The closest I can see is something about the linker, but I don't understand that, so if someone could explain in more detail that would be great.

    In my case, I have two projects in Visual C++: the production dsw and the test code dsw. I have loaded and rebuilt both dsws in debug mode. I want a breakpoint in the production code, which is run via the test scripts. My issue is that I get the error message when I run the test code, because the breakpoint is in the production code, which isn't loaded when the test starts. Near the beginning of the test script there is a mytest_initialize() command; I imagine this goes off and loads the production DLL. Once this line has executed, I can put the breakpoint in my production code and run until I hit it. But it's quite annoying to have to run to this line, set the breakpoint and continue every time I want to run the test.

    So I think the problem is that Visual C++ doesn't realise the two projects are related. Is this a linker issue? What does the linker do, and what settings should I change to make this work? Thanks in advance. Apologies if I should instead be appending this question to the existing one; this is my first post, so I'm not quite sure how this should work.

    Read the article

  • Create an instance of an exported C++ class from Delphi

    - by Alan G.
    I followed an excellent article by Rudy Velthuis about using C++ classes in DLLs. Everything was golden, except that I need access to some classes that do not have corresponding factories in the C++ DLL. How can I construct an instance of a class in the DLL? The classes in question are defined as:

        class __declspec(dllexport) exampleClass {
        public:
            void foo();
        };

    Without a factory, I have no clear way of instantiating the class, but I know it can be done, as I have seen SWIG scripts (.i files) that make these classes available to Python. If Python and SWIG can do it, then I presume (or hope) there is some way to make it happen in Delphi too. Now, I don't know much about SWIG, but it seems like it generates some sort of map for C++ mangled names. Is that anywhere near right? Looking at the exports from the DLL, I suppose I could access the functions and constructor/destructor by index or by the mangled name directly, but that would be nasty; and would it even work? Even if I can call the constructor, how can I do the equivalent of "new CClass();" in Delphi?

    Read the article

  • How to add deploy.jar to classpath?

    - by dma_k
    I am facing this problem: I need to add the ${java.home}/lib/deploy.jar JAR file to the classpath at runtime (dynamically, from Java).

    The solution with Thread#setContextClassLoader(ClassLoader) (mentioned here) does not work because of this bug (if somebody can explain what the problem really is, you are welcome to).

    The solution with -Xbootclasspath/a:"%JAVA_HOME%/jre/lib/deploy.jar" does not work well for me, because I want to have a "pure executable jar" as a deliverable: no wrapping scripts, please (moreover, %JAVA_HOME% may not be defined in the user's environment, on Windows for example, plus I would need to write a script per platform).

    The solution with merging the deploy.jar file into my deliverable works only if I make the build on a Windows platform. Unfortunately, when the deliverable is produced on a build server running on Linux, I get a Linux-dependent JAR, which does not execute on Windows; it fails with the trace below.

    I have read the How the Java Launcher Finds Classes and Java programming dynamics: Java classes and class loading articles, but I've got no extra ideas on how to correctly handle this situation. Any advice or solutions are very welcome.

    Trace:

        java.lang.NoClassDefFoundError: Could not initialize class com.sun.deploy.config.Config
            at com.sun.deploy.net.proxy.UserDefinedProxyConfig.getBrowserProxyInfo(UserDefinedProxyConfig.java:43)
            at com.sun.deploy.net.proxy.DynamicProxyManager.reset(DynamicProxyManager.java:235)
            at com.sun.deploy.net.proxy.DeployProxySelector.reset(DeployProxySelector.java:59)
            ...
        java.lang.NullPointerException
            at com.sun.deploy.net.proxy.DynamicProxyManager.getProxyList(DynamicProxyManager.java:63)
            at com.sun.deploy.net.proxy.DeployProxySelector.select(DeployProxySelector.java:166)

    Read the article

  • JoinColumn name not used in sql

    - by Vladimir
    Hi! I have a problem with mapping a many-to-one relationship without an exact foreign key constraint set in the database. I use the OpenJPA implementation with a MySQL database, but the problem is with the generated SQL for the insert and select statements.

    I have a LegalEntity table which contains a RootId column (among others). I also have an Address table which has a LegalEntityId column that is not nullable and should contain values referencing LegalEntity's RootId column, but without any database constraint (foreign key) set. The Address entity is mapped as:

        @Entity
        @Table(name="address")
        public class Address implements Serializable {
            ...
            @ManyToOne(fetch=FetchType.LAZY, optional=false)
            @JoinColumn(referencedColumnName="RootId", name="LegalEntityId", nullable=false,
                        insertable=true, updatable=true, table="LegalEntity")
            public LegalEntity getLegalEntity() {
                return this.legalEntity;
            }
        }

    The SELECT statement (when fetching a LegalEntity's addresses) and the INSERT statement are generated as:

        SELECT t0.Id, .., t0.LEGALENTITY_ID FROM address t0
            WHERE t0.LEGALENTITY_ID = ? ORDER BY t0.Id DESC [params=(int) 2]
        INSERT INTO address (..., LEGALENTITY_ID) VALUES (..., ?) [params=..., (int) 2]

    If I omit the table attribute, the following statements are generated instead:

        SELECT t0.Id, ... FROM address t0 INNER JOIN legalentity t1 ON t0.LegalEntityId = t1.RootId
            WHERE t1.Id = ? ORDER BY t0.Id DESC [params=(int) 2]
        INSERT INTO address (...) VALUES (...) [params=...]

    So, LegalEntityId is not included in any of the statements. Is it possible to have a relationship based on such referencing (to a column other than the primary key, without a foreign key in the database)? Is there something else missing? Thanks in advance.

    Read the article

  • Google AJAX Transliteration API: How do I translate many elements in a page to some language at once?

    - by Nitesh Panchal
    Hello, I have many elements on a page, all of which I want to translate to some language. The language is not the same for all fields; for the 1st field it may be fr, for the third field it may be en, and then again for the 7th field it may be pa. Basically I wrote the code and it's working:

        <script type="text/javascript">
        //<![CDATA[
        google.load("language", "1");

        window.onload = function() {
            var elemPostTitles = document.getElementsByTagName("h4");
            var flag = true;
            for (var i = 0; i < elemPostTitles.length; i++) {
                while (flag == false) { }
                var postTitleElem = elemPostTitles[i];
                var postContentElem = document.getElementById("postContent_" + i);
                var postTitle = postTitleElem.innerHTML;
                var postContent = postContentElem.innerHTML;
                var languageCode = document.getElementById("languageCode_" + i).value;
                google.language.detect(postTitle, function(result) {
                    if (!result.error && result.language) {
                        google.language.translate(postTitle, result.language, languageCode, function(result) {
                            flag = true;
                            if (result.translation) {
                                postTitleElem.innerHTML = result.translation;
                            }
                        });
                    }
                });
                flag = false;
            }
        };
        //]]>
        </script>

    As you can see, what I am trying to do is stop the loop from advancing until the result of the previous AJAX call has been received; if I don't do this, only the last field gets translated. The code works, but because of the infinite loop I keep getting prompts from Mozilla to "stop executing scripts". How do I get rid of this? Also, is my approach correct, or is there an inbuilt function available which can ease my task? Thanks in advance :)
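
    The root of the problem is that a busy-wait loop blocks the very thread the callbacks need in order to run; the usual pattern is to let each completed call start the next one instead of spinning. A language-neutral sketch of that sequential pattern, shown here in Python asyncio with a stub translate function standing in for the real API call:

        import asyncio

        async def translate(text, target):
            # stub standing in for the real asynchronous translation call
            await asyncio.sleep(0.1)
            return "%s [%s]" % (text, target)

        async def translate_all(titles, codes):
            results = []
            for title, code in zip(titles, codes):
                # each iteration waits for the previous call to finish; no busy loop needed
                results.append(await translate(title, code))
            return results

        if __name__ == "__main__":
            print(asyncio.run(translate_all(["hello", "world"], ["fr", "pa"])))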

    Read the article

  • WordPress: Related Posts by Tags, but in the Same Categories

    - by Norbert
    I'm using the script below to get related posts by tags, but I noticed that if I have a picture of a bird tagged "white" and a table also tagged "white", the table will show up in the related posts section of the bird. Is there a way to only get posts that match at least one of the categories, so the bird and the table won't be related unless they share a category? I tried setting a category__in array inside $args, but no luck!

        <?php
        $tags = wp_get_post_tags($post->ID);
        if ($tags) {
            $tag_ids = array();
            foreach ($tags as $individual_tag)
                $tag_ids[] = $individual_tag->term_id;

            $args = array(
                'tag__in' => $tag_ids,
                'post__not_in' => array($post->ID),
                'showposts' => 6, // Number of related posts that will be shown.
                'caller_get_posts' => 1
            );

            $my_query = new wp_query($args);
            if ($my_query->have_posts()) {
                while ($my_query->have_posts()) {
                    $my_query->the_post(); ?>
                    <?php if ( get_post_meta($post->ID, 'Image', true) ) { ?>
                        <a style="text-decoration: none;" href="<?php the_permalink(); ?>">
                            <img style="max-height: 125px; margin: 15px 10px; vertical-align: middle;"
                                 src="<?php bloginfo('template_directory'); ?>/scripts/timthumb.php?src=<?php echo get_post_meta($post->ID, "Image", $single = true); ?>&w=125&zc=1"
                                 alt="" />
                        </a>
                    <?php } ?>
                <?php }
            }
        }
        ?>

    Read the article

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer, and the address needs to fit in a 32-bit value. Obviously, where possible these structures will use naturally sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32 bits.

    On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So, given a small utility _to_32() such as:

        int _to_32( long l ) {
            int i = l & 0xffffffff;
            assert( i == l );
            return i;
        }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( buff );

    but:

        char buff[ 100 ];
        int a = _to_32( buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!

    Read the article

  • Deploying a WAR to tomcat only using a context descriptor

    - by DanglingElse
    I need to deploy a web application in WAR format to a remote Tomcat 6 server. The thing is that I don't want to do it the easy way, meaning not just copy/paste the WAR file into /webapps. So the second choice is to create a unique "Context Descriptor" and point it at the WAR file (I hope I got that right so far). So I have a few questions:

    1. Is the WAR file allowed to be anywhere in the file system? Meaning, can I copy the WAR file anywhere in the remote file system except /webapps or any other folder of the Tomcat 6 installation?
    2. Is there an easy way to test whether the deployment was successful or not, without using a browser or anything, since I'm reaching the remote server only via SSH and a terminal? (I'm thinking ping?)
    3. Is it normal that startup.sh/shutdown.sh don't exist? I'm not the admin of the server and don't know how Tomcat 6 was installed, but I'm sure that in my local Tomcat installations these files are in /bin and ready to use. I mean, you can still start/restart/stop Tomcat, just not with these (standard?) scripts.

    Thanks a lot.

    Read the article

  • Accept All Cookies via HttpClient

    - by Vinay
    So this is currently how my app is set up:

    1. Login activity.
    2. Once logged in, other activities may be fired up that use PHP scripts which require the cookies sent from logging in.

    I am using one HttpClient across my app to ensure that the same cookies are used, but my problem is that 2 of the 3 cookies are being rejected. I do not care about the validity of the cookies, but I do need them to be accepted. I tried setting the CookiePolicy, but that hasn't worked either. This is what logcat is saying:

        11-26 10:33:57.613: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0] [name: cookie_user_id][value: 1][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"
        11-26 10:33:57.593: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0][name: cookie_session_id][value: 1985208971][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"

    I am sure that my actual code is correct (my app still logs in correctly, it just doesn't accept the aforementioned cookies), but here it is anyway:

        HttpGet httpget = new HttpGet( /* MY URL */ );
        HttpResponse response;
        response = Main.httpclient.execute(httpget);
        HttpEntity entity = response.getEntity();
        InputStream in = entity.getContent();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        StringBuilder sb = new StringBuilder();

    From here I use the StringBuilder to simply get the String of the response; nothing fancy. I understand that the reason my cookies are being rejected is the "Illegal path attribute" (I am running a script at /mobile-api/login.php whereas the cookie comes back with a path of just "trackallthethings"), but I would like to accept the cookies anyhow. Is there a way to do this?

    Read the article

  • Handle window scroll event in greasemonkey script

    - by Akim Khalilov
    Hi. I need some advice. I have a web page and want to extend its functionality with a Greasemonkey script in Firefox. When the page has loaded, I need to run a custom function during the user's scrolling (with the mouse wheel or scrollbar): I want to show a div block when the user scrolls down and hide it when they scroll back to the top. But I've met a problem: I couldn't assign an event handler to the onscroll event. I use the following piece of code:

        function showFixedBlock() {
            ...
        }

        function onScrollStart() {
            ...
            showFixedBlock();
            ...
        }

        window.onscroll = onScrollStart;

    I tested this piece of code on my test HTML page and it works, but when I copy it into Greasemonkey, the script doesn't work. Should I assign the onscroll event handler during page loading? As I understand it, Greasemonkey executes its scripts after the page has loaded; is that the reason for the problem? Are there some additional requirements to handle the 'onscroll' event? How can I do that? Thanks.

    Read the article

  • How to re-prompt after a trap return in bash?

    - by verbose
    I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block:

        trap 'killHandling' TERM

    And in the function:

        killHandling () {
            echo received kill signal, ignoring
            return
        }

    ... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case:

        Enter something:
        # SIGTERM received
        received kill signal, ignoring
        # shell waits at blank line for user input, user gets confused
        # user hits "return", which then gets read as blank input from the user
        # bad things happen because of the blank input

    I have definitely seen scripts which handle this more elegantly, like so:

        Enter something:
        # SIGTERM received
        received kill signal, ignoring
        Enter something:
        # re-prompts user for user input, user is not confused

    What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions. I'd be very grateful for any pointers. Thanks!
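
    For comparison, the same repaint-the-prompt idea in a short Python sketch (this assumes Python 3.5+ behaviour, where a read interrupted by a handled signal is retried once the handler returns; the prompt text is just a placeholder):

        import signal

        PROMPT = "Enter something: "

        def ignore_and_reprompt(signum, frame):
            print("\nreceived kill signal, ignoring")
            print(PROMPT, end="", flush=True)   # repaint the prompt so the user is not left at a blank line

        signal.signal(signal.SIGTERM, ignore_and_reprompt)
        signal.signal(signal.SIGINT, ignore_and_reprompt)

        while True:
            answer = input(PROMPT)
            if answer == "quit":
                break
            print("got:", answer)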

    Read the article

  • Sample twitter application

    - by Jack
    I have installed WAMP and am running PHP scripts on localhost. I have enabled cURL. Here is my code:

        <?php
        function updateTwitter($status) {
            // Twitter login information
            $username = 'xxxxx';
            $password = 'xxxxx';
            // The url of the update function
            $url = 'http://twitter.com/statuses/update.xml';
            // Arguments we are posting to Twitter
            $postargs = 'status=' . urlencode($status);
            // Will store the response we get from Twitter
            $responseInfo = array();
            // Initialize CURL
            $ch = curl_init($url);
            // Tell CURL we are doing a POST
            curl_setopt($ch, CURLOPT_PROXY, "localhost:80");
            curl_setopt($ch, CURLOPT_POST, true);
            // Give CURL the arguments in the POST
            curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
            // Set the username and password in the CURL call
            curl_setopt($ch, CURLOPT_USERPWD, $username . ':' . $password);
            // Set some curl flags (not too important)
            curl_setopt($ch, CURLOPT_VERBOSE, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 0);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            // execute the CURL call
            $response = curl_exec($ch);
            if ($response === false) {
                echo 'Curl error: ' . curl_error($ch);
            } else {
                echo 'Operation completed without any errors<br/>';
            }
            // Get information about the response
            $responseInfo = curl_getinfo($ch);
            // Close the CURL connection
            curl_close($ch);
            // Make sure we received a response from Twitter
            if (intval($responseInfo['http_code']) == 200) {
                // Display the response from Twitter
                echo $response;
            } else {
                // Something went wrong
                echo "Error: " . $responseInfo['http_code'];
            }
            curl_close($ch);
        }

        updateTwitter("Just finished a sweet tutorial on http://brandontreb.com");
        ?>

    Here's my output:

        Operation completed without any errors
        Error: 404

    Here's what error 404 means:

        404 Not Found: The URI requested is invalid or the resource requested, such as a user, does not exist.

    Please help.

    Read the article

  • R plotting multiple histograms on single plot to .pdf as a part of R batch script

    - by Bryce Thomas
    I am writing R scripts which play just a small role in a chain of commands I am executing from a terminal. Basically, I do much of my data manipulation in a Python script and then pipe the output to my R script for plotting. So, from the terminal I execute commands which look something like:

        $ python whatever.py | R CMD BATCH do_some_plotting.R

    This workflow has been working well for me so far, though I have now reached a point where I want to overlay multiple histograms on the same plot, inspired by this answer to another user's question on Stackoverflow. Inside my R script, my plotting code looks like this:

        pdf("my_output.pdf")
        plot(hist(d$original, breaks="FD", prob=TRUE),
             col=rgb(0,0,1,1/4), xlim=c(0,4000), main="original - This plot is in beta")
        plot(hist(d$minus_thirty_minutes, breaks="FD", prob=TRUE),
             col=rgb(1,0,0,1/4), add=T, xlim=c(0,4000), main="minus_thirty_minutes - This plot is in beta")

    Notably, I am using add=T, which is presumably meant to specify that the second plot should be overlaid on top of the first. When my script has finished, the result I am getting is not two histograms overlaid on top of each other, but rather a 3-page PDF whose 3 individual plots contain the titles:

        i) Histogram of d$original
        ii) original - This plot is in beta
        iii) Histogram of d$minus_thirty_minutes

    So there's two points here I'm looking to clarify. Firstly, even if the plots weren't overlaid, I would expect just a 2-page PDF, not a 3-page PDF. Can someone explain why I am getting a 3-page PDF? Secondly, is there a correction I can make here somewhere to get just the two histograms plotted, and both of them on the same plot (i.e. a 1-page PDF)? The other Stackoverflow question/answer I linked to in the first paragraph did mention that alpha-blending isn't supported on all devices, and so I'm curious whether this has anything to do with it. Either way, it would be good to know if there is an R-based solution to my problem or whether I'm going to have to pipe my data into a different language/plotting engine.
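
    If piping the data to a different plotting engine ever becomes necessary, a minimal sketch of the same overlay in Python with matplotlib might look like the following; the random data below is purely a stand-in for d$original and d$minus_thirty_minutes:

        import numpy as np
        import matplotlib
        matplotlib.use("Agg")            # render without a display, suitable for batch runs
        import matplotlib.pyplot as plt

        # placeholder data standing in for the two columns being compared
        original = np.random.exponential(800, 10000)
        minus_thirty_minutes = np.random.exponential(900, 10000)

        fig, ax = plt.subplots()
        ax.hist(original, bins="fd", density=True, alpha=0.25, color="blue", label="original")
        ax.hist(minus_thirty_minutes, bins="fd", density=True, alpha=0.25, color="red",
                label="minus_thirty_minutes")
        ax.set_xlim(0, 4000)
        ax.legend()
        fig.savefig("my_output.pdf")     # one page, both histograms overlaid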

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>

                for something in somethings():
                    <lots more local variables>
                    <lots more computation>

                <etc., etc.>

            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly.

    Should I put this sort of code into a Python class and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
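
    One common direction is to group the shared locals into a small worker class so helper methods stop needing huge parameter lists. A rough sketch of that shape, in which every name is a placeholder for the real computation:

        class Pipeline:
            """Groups the tightly coupled state so helpers need no long parameter lists."""

            def __init__(self, blah):
                self.blah = blah
                self.results = []          # shared state instead of a pile of locals

            def run(self):
                for something in self.somethings():
                    self.process(something)
                return self.results

            def somethings(self):
                # placeholder for the inner iteration in the original loop
                return range(3)

            def process(self, something):
                # placeholder for one chunk of the tightly coupled computation
                self.results.append((self.blah, something))

        def main():
            all_results = []
            for blah in range(2):          # placeholder for blahs()
                all_results.extend(Pipeline(blah).run())
            print(all_results)             # placeholder for <save results>

        if __name__ == "__main__":
            main()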

    Read the article

  • Prolog singleton variables in Python

    - by Rubens
    I'm working on a little set of scripts in Python, and I came to this:

        line = "a b c d e f g"
        a, b, c, d, e, f, g = line.split()

    I'm quite aware of the fact that these are decisions taken during implementation, but shouldn't Python offer (or does it offer) something like:

        _, _, var_needed, _, _, another_var_needed, _ = line.split()

    just as Prolog does, in order to exclude the famous singleton variables? I'm not sure, but wouldn't it avoid unnecessary allocation? Or does creating references to the result of the split call not count as overhead?

    EDIT: Sorry, my point here is: in Prolog, as far as I'm concerned, in an expression like:

        test(L, N) :- test(L, 0, N).
        test([], N, N).
        test([_|T], M, N) :- V is M + 1, test(T, V, N).

    the variable represented by _ is not accessible, and I suppose the reference to the value that does exist in the list [_|T] is not even created. But in Python, if I use _, I can use the last value assigned to _, and I also suppose the assignment occurs for each of the _ variables, which may be considered an overhead. My question here is whether there should be (or whether there is) a syntax to avoid such unnecessary assignments.
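
    A small demonstration of the point in Python: _ is an ordinary name (so the assignments really happen and the last value stays reachable), and one way to avoid binding the unwanted items at all is to pick the needed positions by index:

        from operator import itemgetter

        line = "a b c d e f g"
        parts = line.split()

        # _ is an ordinary name: every assignment to it really happens,
        # and the last value bound to it remains accessible afterwards.
        _, _, var_needed, _, _, another_var_needed, _ = parts
        print(var_needed, another_var_needed, _)   # prints: c f g

        # Picking by index avoids binding the unwanted items in the first place.
        var_needed, another_var_needed = itemgetter(2, 5)(parts)
        print(var_needed, another_var_needed)      # prints: c f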

    Read the article

  • Mail Rule to Run Applescript Not Working?

    - by William Andy Hainline
    Question for my fellow AppleScripters. I have the following script which runs just fine if you run it in AppleScript Editor or Script Debugger, but that won't run at all if you try to run it from a Mail rule. The script is correctly placed in ~/Library/Application Scripts/com.apple.mail, and shows up in the "Run AppleScript" menu in the Mail rule, but simply refuses to work when new mail arrives.

        on perform_mail_action(info)
            tell application "Mail"
                set theMessages to |SelectedMessages| of info
                repeat with thisMessage in theMessages
                    set AppleScript's text item delimiters to {""}
                    set thisSender to sender of thisMessage as string
                    set quotepos to offset of "\"" in thisSender
                    if (quotepos is not 0) then
                        set thisSender to (text items (quotepos + 1) through -1) of thisSender as string
                        set quotepos to offset of "\"" in thisSender
                        if (quotepos is not 0) then
                            set thisSender to (text items 1 through (quotepos - 1)) of thisSender as string
                        end if
                    else
                        set atpos to offset of "@" in thisSender
                        if (atpos is not 0) then
                            set thisSender to (text items 1 through (atpos - 1)) of thisSender as string
                        end if
                        set brkpos to offset of "<" in thisSender
                        if (brkpos is not 0) then
                            set thisSender to (text items (brkpos + 1) through -1) of thisSender as string
                        end if
                    end if
                    tell application "Finder" to say "Mail from " & thisSender
                end repeat
            end tell
        end perform_mail_action

    Any ideas?

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, for repairing a broken system, or just for using a borrowed laptop without destroying the preinstalled Windows).

    The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root= ?

    Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device has been assigned to the USB drive by the kernel, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to higher priority than the internal hard drives. There are various initrd-generating scripts which include support for UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others.

    The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage 1).
    2. Grub loads its configuration and stage 2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.

    Read the article

  • jQuery UI - Sortable isn't firing?

    - by Kenny Bones
    Hi, I'm trying to get the jQuery UI Sortable plugin to work, and I've created a list that looks like this:

        <ul id="sortable">
            <li>Item 1</li>
            <li>Item 2</li>
            <li>Item 3</li>
            <li>Item 4</li>
            <li>Item 5</li>
        </ul>

    And I've included the initialisation code:

        $(function() {
            $("#sortable").sortable();
            alert('test');
            $("#sortable").disableSelection();
        });

    I tried putting the alert box before .sortable is run and the alert box shows, but putting it after .sortable isn't working, which means that .sortable is failing, right? I've included the scripts and put them in the head of the HTML document:

        <script type="text/javascript" src="js/jquery.ui.core.min.js"></script>
        <script type="text/javascript" src="js/jquery.ui.mouse.min.js"></script>
        <script type="text/javascript" src="js/jquery.ui.sortable.min.js"></script>
        <script type="text/javascript" src="js/jquery.ui.widget.min.js"></script>

    Which is correct, right? And the function that actually runs .sortable is in a merged JS file along with all the other JS snippets and plugins.

    Read the article

  • Import Error: No module named testrunner

    - by JiL
    I followed this to add zc.recipe.testrunner to my buildout. I can run buildout successfully, but when I run bin/test, I get:

        ImportError: No module named testrunner

    I have zope.testrunner-4.0.4-py2.4.egg in /usr/local/lib/python2.4/site-packages. I also pinned:

        zope.testrunner = 4.0.4
        zc.recipe.testruner = 1.4.0
        zc.recipe.egg = 1.3.2

    When I ran buildout, I used -vvv and I got:

        ...
        Installing 'zc.recipe.testrunner'.
        We have the distribution that satisfies 'zc.recipe.testrunner==1.4.0'.
        Egg from site-packages: z3c.recipe.scripts 1.0.1
        Egg from site-packages: zope.testrunner 4.0.4
        Egg from site-packages: zope.interface 3.8.0
        Egg from site-packages: zope.exceptions 3.7.1
        ...
        We have the distribution that satisfies 'zope.testrunner==4.0.4'.
        Egg from site-packages: zope.testrunner 4.0.4
        Adding required 'zope.interface' required by zope.testrunner 4.0.4.
        We have a develop egg: zope.interface 0.0
        Adding required 'zope.exceptions' required by zope.testrunner 4.0.4.
        We have a develop egg: zope.exceptions 0.0
        ...

    Why is it I get an ImportError? Is zope.testrunner not installed correctly?

    Read the article

  • Problem executing trackPageview with Google Analytics.

    - by dmrnj
    I'm trying to capture clicks on certain download links and track them in Google Analytics. Here's my code:

        var links = document.getElementsByTagName("a");
        for (var i = 0; i < links.length; i++) {
            linkpath = links[i].pathname;
            if (linkpath.match(/\.(pdf|xls|ppt|doc|zip|txt)$/) || links[i].href.indexOf("mode=pdf") >= 0) {
                // this matches our search
                addClickTracker(links[i]);
            }
        }

        function addClickTracker(obj) {
            if (obj.addEventListener) {
                obj.addEventListener('click', track, true);
            } else if (obj.attachEvent) {
                obj.attachEvent("on" + 'click', track);
            }
        }

        function track(e) {
            linkhref = (e.srcElement) ? e.srcElement.pathname : this.pathname;
            pageTracker._trackPageview(linkhref);
        }

    Everything up until the pageTracker._trackPageview() call works. In my debugging, linkhref is being passed fine as a string: no abnormal characters, nothing. The issue is that, watching my HTTP requests, Google never makes a second call to the tracking gif (as it does if you call this function in an "onclick" property). Calling the tracker from my JS console also works as expected; it's only in my listener that it fails. Could it be that my listener is not deferring the default action (loading the new page) before it has a chance to contact Google's servers? I've seen other tracking scripts that do a similar thing without any deferral.

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository that I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing.

    Now I have a bunch of git repositories for various projects, several of which are on GitHub. I also have the SVN repository I mentioned, imported via the git-svn command.

    Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is that since it's a private repository, git doesn't allow checking out just a specific folder (one that I could push to GitHub as a separate project while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup).

    Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then I do git push backupdrive1, git push mymemorystick, etc.

    So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?
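
    As one small automation of the multi-remote push step described above, a sketch in Python; this is just a thin wrapper around the git command line, the remote names are whatever "git remote" reports, and none of it is part of git itself:

        import subprocess

        def push_everywhere(repo_path="."):
            """Push the current branch of one repository to every configured remote."""
            remotes = subprocess.run(
                ["git", "remote"], cwd=repo_path,
                capture_output=True, text=True, check=True,
            ).stdout.split()
            for remote in remotes:                 # e.g. github, backupdrive1, mymemorystick
                subprocess.run(["git", "push", remote], cwd=repo_path, check=True)

        if __name__ == "__main__":
            push_everywhere()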

    Read the article

  • Changing Apache2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, hopefully someone can help here. I recently installed WampServer 2.0 with Apache 2.2.11. My issue is that I have some large PHP scripts which time out at the default 5 minute (300 second) browser limit (I'm using IE8). It is critical I get this limit extended.

    I have tried changing the httpd.conf file to include the following:

        TimeOut 1200

    My objective was to set the timeout to 1200 seconds, or 20 minutes. I had just chosen a random location to place this directive within the httpd.conf file, as I cannot locate any documentation to suggest it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be found in the system tray for WampServer, however they have no effect: the browser still times out after 5 minutes. I thought perhaps I had the capitalisation wrong, so I changed it to:

        Timeout 1200

    This change had no effect either. Can someone please help? This is very frustrating. Maybe the directive can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this.

    Regards, Adrian.

    Read the article

  • JQuery fadeIn fadeOut loop issue

    - by Tarun
    I am trying to create a jQuery fadeIn/fadeOut effect for my page content using the code below.

        $(document).ready(function () {
            $("#main").click(function () {
                $("#content").fadeOut(800, function () {
                    $("#content").load("main.html", function () {
                        $("#content").fadeIn(800);
                    });
                });
            });

            $("#gallery").click(function () {
                $("#content").fadeOut(800, function () {
                    $("#content").load("gallery.html", function () {
                        $("#content").fadeIn(800);
                    });
                });
            });
        });

    So whenever a user clicks on either the main link or the gallery link, the old content fades out and the new content fades in. The problem I am facing is that for every link I have to repeat the same code again and again, so I tried to use a loop to simplify this, but it doesn't work. Here is my code:

        var p = ["#main", "#gallery", "#contact"];
        var q = ["main.html", "gallery.html", "contact.html"];

        for (i = 0; i <= (p.length - 1); i++) {
            $(p[i]).click(function () {
                $("#content").fadeOut(500, function () {
                    $("#content").load(q[i], function () {
                        $("#content").fadeIn(500);
                    });
                });
            });
        }

    It works fine when I repeat the script for each link, but it doesn't work when I combine them in a loop: I only get the fadeOut effect and nothing happens after that. This might be a very simple issue or it may be something deep in jQuery. Any hint or help is greatly appreciated. TK
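
    For what it's worth, the loop version most likely fails because every click handler closes over the same loop variable i, which has already run past the last index by the time any handler fires, so q[i] is undefined when load() is called. The same late-binding pitfall can be shown in a few lines of Python, together with the usual fix of capturing the current value:

        # Closures created in a loop all see the loop variable's final value,
        # not the value it had when each closure was created.
        handlers = [lambda: print(i) for i in range(3)]
        for h in handlers:
            h()                # prints 2, 2, 2

        # Capturing the current value explicitly (here via a default argument) fixes it.
        handlers = [lambda i=i: print(i) for i in range(3)]
        for h in handlers:
            h()                # prints 0, 1, 2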

    Read the article
