Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.

Page 701 of 980

  • Detect Installed Application URI Handler on Webkit browsers

    - by Punit Raizada
    Guys, I have a question mainly related to the iPhone web browser, but I am hoping the same solution would work on other WebKit-based browsers. I have an application (iPhone + Android) that registers a handler for a custom URI scheme (appuri://) on the phone. I am able to launch the application by linking to "appuri://act/launch" from my web pages. This works only if my application is installed on the device. If the device does not have the app installed, a message comes up: "Safari was not able to open ....". What I want to do is detect from the browser whether the URI scheme is supported, and then show my own message saying "please download the app ..blah blah blah" if no handler for the URI scheme is found. Is there a way I can detect or find the list of URL scheme handlers on the phone from the web browser?

    Read the article

  • Studying MySQL, SQLite source code to learn about RDBMS implementation

    - by Yang
    I know implementing a database is a huge topic, but I want to have a basic understanding of how database systems work (e.g. memory management, binary trees, transactions, SQL parsing, multi-threading, partitions, etc.) by investigating the source code of a database. There are a few proven, very robust open-source databases like MySQL, SQLite and so on; however, the code is very complicated and I have no clue where to start. I also find that the old-school database textbooks only explain the theory, not the implementation details. Can anyone suggest how I should get started, and whether there are any books that emphasize the technology and techniques of building a DBMS used in the modern database industry?

    Read the article

  • ASP.NET Session expires in no time?

    - by Galilyou
    Weird problem! The ASP.NET session expires instantly. In my web.config I have this session setting: <sessionState mode="InProc" timeout="10000" /> AFAIK the timeout attribute's value is in minutes and can't be greater than 525,600 minutes (1 year). I don't understand what I am doing wrong here. Why is the session expiring? Is it a server memory issue? I don't think so; the server is pretty decent and it hosts only one site, which isn't doing much after all. Ideas? EDIT: After setting the cookieless attribute to true, and watching the session id in the URL, I can see that the session id is CHANGING. I assume this means the session is expiring. The IIS settings are correct AFAIK (the enable session state checkbox is checked, and the timeout value is 20). A picture is worth a hundred words:

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). Five of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations: When the server crashes, it does not blue screen; it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem: the logs go from routine logging to an Unexplained Shutdown event the next morning, when I have to hard-reset the server to get it to boot. 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard-reset it and try again. Even after a successful boot and a chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't come back up cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary server. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDP'd into the file server and also into the server running Backup Exec. On the file server I opened Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60 GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back into my bag of tricks: driving to the office and hard-resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again, I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server: deleted and recreated the RAID 5 sets; initialized the drives; reloaded the server with a fresh Server 2003 install; confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers; uninstalled/reinstalled the Backup Exec Remote Agent; uninstalled the Trend Micro A/V client; configured the server not to reboot itself after a blue screen so I can see any stop error (I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up); and ran chkdsk /r from the Windows Recovery Console (several errors were found and corrected, but this did not help my problem).

    Help confirm or deny the following assumptions: (1) There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup. (2) This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation. (3) This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

    Read the article

  • Best tools to monitor Tomcat

    - by Pier Luigi
    Hi all, I'm searching for free tools to monitor Tomcat (traffic, memory usage, threads, requests, CPU, logs, ...). I'm currently using Lambda Probe on Tomcat 5.5.x, but it seems that it is no longer developed (or is it? the site lambdaprobe.org is always down for me...). Does anyone have good experiences to share? In Lambda Probe some info is available only if Tomcat is instrumented with JMX. Well, JMX is something strange and mysterious to me. Is it a good solution on a production server? Is it worth spending my (little) time to learn it?
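
    For what it's worth, a minimal sketch of reading a couple of JVM figures over JMX, assuming Tomcat was started with remote JMX enabled (e.g. -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false); the host and port below are placeholders, and this only covers JVM-level stats, not Tomcat's own request counters.

      import java.lang.management.ManagementFactory;
      import java.lang.management.MemoryMXBean;
      import java.lang.management.ThreadMXBean;
      import javax.management.MBeanServerConnection;
      import javax.management.remote.JMXConnector;
      import javax.management.remote.JMXConnectorFactory;
      import javax.management.remote.JMXServiceURL;

      public class TomcatJmxProbe {
          public static void main(String[] args) throws Exception {
              // Hypothetical host/port; must match the jmxremote settings Tomcat was started with.
              JMXServiceURL url = new JMXServiceURL(
                      "service:jmx:rmi:///jndi/rmi://localhost:9004/jmxrmi");
              JMXConnector connector = JMXConnectorFactory.connect(url);
              try {
                  MBeanServerConnection conn = connector.getMBeanServerConnection();

                  // Standard platform MXBeans exposed by the Tomcat JVM.
                  MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                          conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                  ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                          conn, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

                  System.out.println("Heap used (MB): "
                          + memory.getHeapMemoryUsage().getUsed() / (1024 * 1024));
                  System.out.println("Live threads:   " + threads.getThreadCount());
              } finally {
                  connector.close();
              }
          }
      }

    Run on a schedule, this kind of probe is enough to log heap and thread trends without installing anything inside Tomcat itself.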

    Read the article

  • What is a good Java crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel. A quick Google search gives a whole bunch of Java libraries for building a web crawler. Nutch is of course a very robust package, but it seems a bit too advanced for my needs: I only need to crawl a handful of websites a week, each containing a couple of thousand pages. Which open-source Java library would you recommend, considering speed, multithreading (or even distributed crawling), extensibility with new functionality, active maintenance, and documentation?
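
    Not a library recommendation, but a minimal sketch of the crawl loop itself at this scale, assuming jsoup is on the classpath for fetching and link extraction; the seed URL and page budget are placeholders, and politeness (robots.txt, delays) and error handling are left out.

      import java.util.ArrayDeque;
      import java.util.HashSet;
      import java.util.Queue;
      import java.util.Set;
      import org.jsoup.Jsoup;
      import org.jsoup.nodes.Document;
      import org.jsoup.nodes.Element;

      public class MiniCrawler {
          public static void main(String[] args) throws Exception {
              String seed = "http://example.com/";   // hypothetical start page
              int maxPages = 1000;                   // roughly the weekly budget per site

              Queue<String> frontier = new ArrayDeque<>();
              Set<String> seen = new HashSet<>();
              frontier.add(seed);
              seen.add(seed);

              while (!frontier.isEmpty() && seen.size() <= maxPages) {
                  String url = frontier.poll();
                  Document doc = Jsoup.connect(url).userAgent("mini-crawler").get();
                  System.out.println(url + " -> " + doc.title());

                  // Enqueue links that stay on the same site and haven't been visited yet.
                  for (Element link : doc.select("a[href]")) {
                      String next = link.attr("abs:href");
                      if (next.startsWith(seed) && seen.add(next)) {
                          frontier.add(next);
                      }
                  }
              }
          }
      }

    For anything larger or multithreaded, a dedicated framework is still the better fit; this just shows how little code the basic frontier/visited-set loop needs.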

    Read the article

  • SEO chaos from changing robots.txt file in Wordpress site

    - by Seedorf
    Hi there, I recently edited the robots.txt file on my site using a WordPress plugin. However, since I did this, Google seems to have removed my site from its search results. I'd appreciate an expert opinion on why this is so, and a possible solution. I'd initially done it to increase my search ranking by limiting the pages being accessed by Google. This is my robots.txt file in WordPress:

      User-agent: *
      Disallow: /cgi-bin
      Disallow: /wp-admin
      Disallow: /wp-includes
      Disallow: /wp-content/plugins
      Disallow: /wp-content/cache
      Disallow: /trackback
      Disallow: /feed
      Disallow: /comments
      Disallow: /category/*/*
      Disallow: */trackback
      Disallow: */feed
      Disallow: */comments
      Disallow: /*?*
      Disallow: /*?
      Allow: /wp-content/uploads
      Sitemap: http://www.instant-wine-cellar.co.uk/wp-content/themes/Wineconcepts/Sitemap.xml

    Read the article

  • Large XML files in dataset (outofmemory)

    - by dklein
    Hi folks, I am currently trying to load a rather large XML file into a DataSet. The XML file is about 700 MB, and every time I try to read it, it takes a long time and after a while throws an "out of memory" exception.

      DataSet ds = new DataSet();
      ds.ReadXml(pathtofile);

    The main problem is that it is necessary for me to use those DataSets (I use them to import the data from the XML file into a Sybase database - foreach table, foreach row, foreach column) and that I have no schema file. I have already googled for a while, but I only found solutions that won't be usable for me.
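
    The usual way around this class of problem is to stream the file element by element and insert as you go, instead of materializing the whole document; in .NET that would mean XmlReader rather than DataSet.ReadXml. Purely as an illustration of the pattern, here is a minimal sketch using Java's StAX API - the file path and the "row" element/attribute layout are hypothetical.

      import java.io.FileInputStream;
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLStreamConstants;
      import javax.xml.stream.XMLStreamReader;

      public class StreamRows {
          public static void main(String[] args) throws Exception {
              XMLInputFactory factory = XMLInputFactory.newInstance();
              XMLStreamReader reader = factory.createXMLStreamReader(
                      new FileInputStream("bigfile.xml")); // hypothetical path
              while (reader.hasNext()) {
                  if (reader.next() == XMLStreamConstants.START_ELEMENT
                          && "row".equals(reader.getLocalName())) { // hypothetical element name
                      // Handle one row at a time and insert it into the database
                      // immediately, so only a single row is ever held in memory.
                      for (int i = 0; i < reader.getAttributeCount(); i++) {
                          String column = reader.getAttributeLocalName(i);
                          String value = reader.getAttributeValue(i);
                          // insertIntoSybase(column, value); // placeholder for the real insert
                      }
                  }
              }
              reader.close();
          }
      }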

    Read the article

  • Deny Switching to Tabpage in a tabcontrol

    - by Maneesh
    I have a form (C#) with a tab control that has around five tab pages. Each of the tabs has a few textboxes. 1) If a user is in, say, Tab A and edits certain fields, I need to validate the text entered; if it is found invalid, I should not allow any tab switch. Is that possible? 2) Another case could be: the user edits some values and clicks on another tab; on doing so, I need to check whether the values that were entered for Tab A are correct or not. Can I do this? I am a novice at C#, so maybe these questions sound very basic; any help will be appreciated. I also want to know what these tab page events are: Leave, Validated, and Validating.

    Read the article

  • Use javac fork attribute with IBM JDK

    - by avjaz
    Hi - I have a large Ant build that I'm working on, which is currently running out of memory. One way I've read that can help mitigate this problem is to use javac fork="true" to run javac in a separate JVM. My problem is that I need to compile the project with the IBM JDK (this is not the JDK referenced by JAVA_HOME, and I would prefer it not to be). I tried setting the executable attribute of Ant's javac task to the path to IBM's javac, but no joy (the project still won't compile). Ant's docs for the executable attribute state: "Complete path to the javac executable to use in case of fork="yes". Defaults to the compiler of the Java version that is currently running Ant. Ignored if fork="no". Since Ant 1.6 this attribute can also be used to specify the path to the executable when using jikes, jvc, gcj or sj." Does anyone have any ideas? Thanks -

    Read the article

  • Entity Framework 4 relationship management in POCO Templates - More lazy than FixupCollection?

    - by Joe Wood
    I've been taking a look at the EF4 POCO templates in Beta 2. The FixupCollection looks fine for maintaining model correctness after updating a relationship collection property (i.e., setting product.Orders would set the order.Product reference). But what about support for handling the scenario where some of those Order objects are removed from the context - the use case of maintaining cascading deletes in the in-memory model? The old typed DataSet model used to do this by performing the query through the container to derive the relationship results. Like the DataSet, this would require a reference to the ObjectContext inside the entity class so that it could query the top-level Order collection. Better support for separation of concerns in the ObjectContext would be required. It looks like EF is not suited to this use case, which DataSets handled out of the box... am I right?

    Read the article

  • Scrape HTML tables from a given URL into CSV

    - by dreeves
    I seek a tool that can be run on the command line like so: tablescrape 'http://someURL.foo.com' [n]. If n is not specified and there's more than one HTML table on the page, it should summarize them (header row, total number of rows) in a numbered list. If n is specified, or if there's only one table, it should parse the table and spit it to stdout as CSV or TSV. Potential additional features: To be really fancy you could parse a table within a table, but for my purposes - fetching data from Wikipedia pages and the like - that's overkill. The Perl module HTML::TableExtract can do this and may be a good place to start for writing the tool I have in mind. An option to asciify any Unicode. An option to apply an arbitrary regex substitution for fixing weirdnesses in the parsed table. Related questions:

      http://stackoverflow.com/questions/259091/how-can-i-scrape-an-html-table-to-csv
      http://stackoverflow.com/questions/1403087/how-can-i-convert-an-html-table-to-csv
      http://stackoverflow.com/questions/2861/options-for-html-scraping
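
    Not an existing tool, but a minimal sketch of the same behaviour in Java, assuming jsoup is on the classpath; the fallback URL is the placeholder from the question, cell quoting/escaping is omitted, and nested tables are not handled.

      import org.jsoup.Jsoup;
      import org.jsoup.nodes.Document;
      import org.jsoup.nodes.Element;
      import org.jsoup.select.Elements;

      public class TableToCsv {
          public static void main(String[] args) throws Exception {
              String url = args.length > 0 ? args[0] : "http://someURL.foo.com"; // placeholder
              Document doc = Jsoup.connect(url).get();
              Elements tables = doc.select("table");

              if (args.length < 2) {
                  // No table index given: summarize each table, as the spec asks.
                  for (int i = 0; i < tables.size(); i++) {
                      Elements rows = tables.get(i).select("tr");
                      String header = rows.isEmpty() ? "(empty table)" : rows.first().text();
                      System.out.println((i + 1) + ". " + header + " (" + rows.size() + " rows)");
                  }
                  return;
              }

              // Table index given (1-based): dump that table as naive CSV.
              Element table = tables.get(Integer.parseInt(args[1]) - 1);
              for (Element row : table.select("tr")) {
                  Elements cells = row.select("th, td");
                  StringBuilder line = new StringBuilder();
                  for (int i = 0; i < cells.size(); i++) {
                      if (i > 0) line.append(",");
                      line.append(cells.get(i).text());
                  }
                  System.out.println(line);
              }
          }
      }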

    Read the article

  • Merge items in nanoc

    - by Gordon Potter
    I have been trying to use nanoc for generating a static website. I need to organize a complex arrangement of pages and I want to keep my content DRY. How does the concept of includes or merges work within the nanoc system? I have read the docs, but I can't seem to find what I want. For example: how can I take two partial content items and merge them together into a new content item? In StaticMatic you can do something like the following inside your page: = partial('partials/shared/navigation'). How would a similar convention work within nanoc?

    Read the article

  • How do I access the popup page DOM from bg page in Chrome extension?

    - by Fletcher Moore
    In Google Chrome's extension developer documentation, it says: "The HTML pages inside an extension have complete access to each other's DOMs, and they can invoke functions on each other. ... The popup's contents are a web page defined by an HTML file (popup.html). The popup doesn't need to duplicate code that's in the background page (background.html) because the popup can invoke functions on the background page." I've loaded and tested jQuery, and I can access DOM elements in background.html with jQuery, but I cannot figure out how to get access to DOM elements in popup.html from background.html.

    Read the article

  • Memcached Debugging / Server Logs: Monitor the Memcached Servers?

    - by user1179459
    I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery, and it works fine 95% of the time. However, when the server load is high, Memcached (I presume it's Memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the Memcached servers, or somehow write a log file recording where the failures/errors come in... Any idea how I can do this? Or any idea why the Memcached servers fail? I run Memcached as follows:

      $GLOBALS['MemCached'] = FALSE;
      $GLOBALS['MemCached'] = new Memcache;
      $GLOBALS['MemCached']->pconnect('localhost', 11211);

    My memcached config is as follows:

      #! /bin/sh
      #
      # chkconfig: - 55 45
      # description: The memcached daemon is a network memory cache service.
      # processname: memcached
      # config: /etc/sysconfig/memcached
      # pidfile: /var/run/memcached/memcached.pid

      # Standard LSB functions
      #. /lib/lsb/init-functions

      # Source function library.
      . /etc/init.d/functions

      PORT=11211
      USER=memcached
      MAXCONN=1024
      CACHESIZE=128
      OPTIONS=""

      if [ -f /etc/sysconfig/memcached ];then
          . /etc/sysconfig/memcached
      fi

      # Check that networking is up.
      . /etc/sysconfig/network

      if [ "$NETWORKING" = "no" ]
      then
          exit 0
      fi

      RETVAL=0
      prog="memcached"
      pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
      lockfile=${LOCKFILE-/var/lock/subsys/memcached}

      start () {
          echo -n $"Starting $prog: "
          # Ensure that /var/run/memcached has proper permissions
          if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
              chown $USER /var/run/memcached
          fi
          daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && touch ${lockfile}
      }

      stop () {
          echo -n $"Stopping $prog: "
          killproc -p ${pidfile} /usr/bin/memcached
          RETVAL=$?
          echo
          if [ $RETVAL -eq 0 ] ; then
              rm -f ${lockfile} ${pidfile}
          fi
      }

      restart () {
          stop
          start
      }

      # See how we were called.
      case "$1" in
          start)
              start
              ;;
          stop)
              stop
              ;;
          status)
              status -p ${pidfile} memcached
              RETVAL=$?
              ;;
          restart|reload|force-reload)
              restart
              ;;
          condrestart|try-restart)
              [ -f ${lockfile} ] && restart || :
              ;;
          *)
              echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
              RETVAL=2
              ;;
      esac

      exit $RETVAL
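
    One low-tech way to watch the daemon is to poll its built-in counters (get_hits, get_misses, evictions, curr_connections) and log them over time; below is a minimal sketch that talks to memcached's text protocol directly, assuming it is reachable on localhost:11211. The question's code is PHP, so treat this Java version purely as an illustration of the approach.

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.io.PrintWriter;
      import java.net.Socket;

      public class MemcachedStats {
          public static void main(String[] args) throws Exception {
              // Hypothetical host/port; adjust to match the running daemon.
              try (Socket socket = new Socket("localhost", 11211);
                   PrintWriter out = new PrintWriter(socket.getOutputStream());
                   BufferedReader in = new BufferedReader(
                           new InputStreamReader(socket.getInputStream()))) {

                  out.print("stats\r\n");   // memcached's text-protocol stats command
                  out.flush();

                  String line;
                  while ((line = in.readLine()) != null && !line.equals("END")) {
                      // Each line looks like: STAT get_hits 12345
                      System.out.println(line);
                  }
              }
          }
      }

    Run from cron and appended to a file, this gives a crude but useful log of what the daemon was doing around the time the chat stalls.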

    Read the article

  • MVVM Good Design. DataSet or a RowViewModel

    - by LnDCobra
    I have just started learning MVVM and I'm having a dilemma. I have a main ViewModel, and inside this model I have a number of DataSets. Now, should I be creating a new ViewModel for each row inside the DataSet, or expose the DataSet itself as a DependencyProperty? For now the DataSet has about 20 rows inside it, and the thought of iterating through each row to create a ViewModel bound to each row... might not be the best option for performance and memory reasons in the future, like when there are 1000+ rows. Should I still go ahead and create a RowViewModel, iterate through the DataSet, and have an ObservableCollection of them, or just expose the DataSet? Any help would be greatly appreciated.

    Read the article

  • SSL on Heroku / User Authentication Across Multiple Domains

    - by Euwyn
    I posted a previous question on this, but have a follow-up. I was trying to create a workaround for the expensive custom-domain SSL. I'm willing to live with bumping a user to https://app.heroku.com from http://www.app.com for certain secure pages, and have monkey-patched SSL required to make this happen. However, the issue now is making sure my User is logged in when I do so. As I understand it, cookies aren't cross-domain. Is there a way around this issue?

    Read the article

  • Elisp performance on Windows and Linux

    - by JasonFruit
    I have the following dead simple elisp functions; the first removes the fill breaks from the current paragraph, and the second loops through the current document applying the first to each paragraph in turn, in effect removing all single line-breaks from the document. It runs fast on my low-spec Puppy Linux box using emacs 22.3 (10 seconds for 600 pages of Thomas Aquinas), but when I go to a powerful Windows XP machine with emacs 21.3, it takes almost an hour to do the same document. What can I do to make it run as well on the Windows machine with emacs 21.3? (defun remove-line-breaks () "Remove line endings in a paragraph." (interactive) (let ((fill-column 90002000)) (fill-paragraph nil))) : (defun remove-all-line-breaks () "Remove all single line-breaks in a document" (interactive) (while (not (= (point) (buffer-end 1))) (remove-line-breaks) (next-line 1))) Forgive my poor elisp; I'm having great fun learning Lisp and starting to use the power of emacs, but I'm new to it yet.

    Read the article

  • How to map old paths to Drupal paths

    - by kidrobot
    I'm converting a WordPress blog to Drupal and need to map the WP paths to the new Drupal ones. What's the best practice for doing this? There are only around a hundred pages to map. I've been experimenting with the URL Alter module, which provides an alternative to messing with custom_url_rewrite functions in settings.php, but I keep getting 404s. I'm waiting to hear back from the module maintainer about whether this is what the module is intended for. In the meantime, I am wondering how others do this. Should I be using .htaccess?

    Read the article

  • Internet Explorer ashx file problem

    - by vondip
    My problem is a bit complicated: I am writing in C# and ASP.NET and using jQuery. I have a page that sends requests to the server using jQuery's ajax method, and I have an ashx file (handler) to respond to these requests. The user can perform several changes on several pages, then use some method that will call the ajax method. My ashx file reads some values from the session variables and acts accordingly. This works fine in all browsers except Internet Explorer. In Internet Explorer the session seems to hold old information (old user IDs). It's incredible: the same code works fine in Firefox, Chrome and Safari but fails in IE. What could be causing it? I have no clue where to even start looking for a solution. By the way, sorry for the general title; I couldn't figure out how to explain it in just a few words. Thank you!

    Read the article

  • com.sun.management.OperatingSystemMXBean use in an OSGi bundle

    - by Paul Whelan
    I have some legacy code that was used to monitor my application's CPU, memory, etc. that I want to convert to a bundle. Now when I start this bundle it complains: Missing Constraint: Import-Package: com.sun.management; version="0.0.0". I had used the OperatingSystemMXBean to get access to stats on the JVM. My question is: can I use this class inside an OSGi container, and if so, how? Or should I use some other way to monitor my application? Pre-OSGi, I was making an RMI call to the application from a web frontend to get the node's performance figures.
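
    For reference, a minimal sketch of the call involved; whether it resolves inside OSGi usually comes down to getting the framework to expose com.sun.management from the system bundle (for example via the org.osgi.framework.system.packages.extra framework property) or marking the import optional - treat those settings as assumptions to verify against your particular container.

      import java.lang.management.ManagementFactory;

      public class CpuMemoryProbe {
          public static void main(String[] args) {
              // Casting to the com.sun.management subinterface exposes the extra
              // counters; this cast is what forces the bundle to import com.sun.management.
              com.sun.management.OperatingSystemMXBean os =
                      (com.sun.management.OperatingSystemMXBean)
                              ManagementFactory.getOperatingSystemMXBean();

              System.out.println("Process CPU time (ns): " + os.getProcessCpuTime());
              System.out.println("Free physical memory:  " + os.getFreePhysicalMemorySize());
              System.out.println("Total physical memory: " + os.getTotalPhysicalMemorySize());
          }
      }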

    Read the article

  • Cannot use Html.ActionLink in asp.net mvc spark files

    - by midas06
    I'm using the Spark view engine with my ASP.NET MVC application. In my aspx pages I can successfully use Html.ActionLink, but when I attempt it in Spark files, it doesn't show up in IntelliSense, and when I try to run it anyway, I get: Dynamic view compilation failed. c:\Users\midas\Documents\Visual Studio 2008\Projects\ChurchMVC\ChurchMVC\Views\Home\Index.spark(73,25): error CS1061: 'System.Web.Mvc.HtmlHelper' does not contain a definition for 'ActionLink' and no extension method 'ActionLink' accepting a first argument of type 'System.Web.Mvc.HtmlHelper' could be found (are you missing a using directive or an assembly reference?) I do have System.Web.Mvc referenced, and I have added it in _global.spark. None of that helps. Any ideas?

    Read the article

  • WebDAV auto-versioning in Git or Hg or any modern VCS

    - by Marcus P S
    I just recently learned of SVN's auto-versioning feature for WebDAV. Although I understand this is not a replacement for proper versioning, with messages documenting change sets, it strikes me as a solid and safe replacement for Dropbox (minus nice GUIs and web pages). However, since commits in auto-versioning are frequent, I'd imagine that Git or Hg would be better suited for this, just because of their more compact databases (although I wonder if the distributed nature of things could make the automation ugly for resolving conflicts). Is this a feature that has been implemented using Git or Hg, as far as anyone knows?
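
    Not the WebDAV integration itself, but a minimal sketch of the auto-commit half of the idea on top of Git, assuming the git command line is on the PATH and the watched directory is already a working copy; it only watches the top-level directory (subdirectories would need their own registrations) and commits blindly on every change, so it is a toy version of what a real auto-versioning layer would do.

      import java.nio.file.FileSystems;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.nio.file.StandardWatchEventKinds;
      import java.nio.file.WatchKey;
      import java.nio.file.WatchService;

      public class AutoCommitWatcher {
          public static void main(String[] args) throws Exception {
              Path repo = Paths.get(args.length > 0 ? args[0] : "."); // existing git working copy
              WatchService watcher = FileSystems.getDefault().newWatchService();
              repo.register(watcher,
                      StandardWatchEventKinds.ENTRY_CREATE,
                      StandardWatchEventKinds.ENTRY_MODIFY,
                      StandardWatchEventKinds.ENTRY_DELETE);

              while (true) {
                  WatchKey key = watcher.take();   // block until something changes
                  key.pollEvents();                // drain the events; we commit everything anyway
                  runGit(repo, "add", "-A");
                  runGit(repo, "commit", "-m", "auto-commit");
                  key.reset();
              }
          }

          private static void runGit(Path repo, String... gitArgs) throws Exception {
              String[] cmd = new String[gitArgs.length + 1];
              cmd[0] = "git";
              System.arraycopy(gitArgs, 0, cmd, 1, gitArgs.length);
              new ProcessBuilder(cmd).directory(repo.toFile()).inheritIO().start().waitFor();
          }
      }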

    Read the article

  • web.config, configSource, and "The 'xxx' element is not declared" warning.

    - by UpTheCreek
    I have broken the horribly unwieldy web.config file down into individual files for some of the sections (e.g. connectionStrings, authentication, pages, etc.) using the configSource attribute. This is working fine, but the individual XML files that hold the section 'snippets' cause warnings in VS. For example, a file named roleManager.config is used for the role manager section and looks like this:

      <roleManager enabled="false">
      </roleManager>

    However, I get a blue squiggle under the roleManager element in VS, and the following warning: The 'roleManager' element is not declared. I guess this is something to do with valid XML and schemas, etc. Is there an easy way to fix this? Something I can add to the individual files? Thanks. P.S. I have heard that it is bad practice to break the web.config file out like this, but I don't really understand why - can anyone enlighten me?

    Read the article

  • How to eliminate a sub-directory level from all URLs in Website

    - by frank13
    I have a website and I just set up an open-source shopping cart (i.e., Magento). I installed the cart in a sub-directory off the document root as /magento/, per the installation guidelines, so my website cart's URL is http://mydomain.com/magento/. I have no public pages off the document root and I actually want my cart to be my home page -- in other words, I want http://mydomain.com/magento/ to resolve as http://mydomain.com/. Is this possible? Can I use mod_rewrite to make it happen? If so, can you suggest what the mod_rewrite directives would look like? Or is it simply a permanent redirect like: redirect 301 /magento http://mydomain.com/ Thanks.

    Read the article
