Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.

Page 201/1008 | < Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >

  • ASP.NET AppendHeader not working in ASP.NET MVC

    - by Chao
    I'm having problems getting AppendHeader to work properly when I also use an authorize filter. I'm using an action filter for my AJAX actions that applies Expires, Last-Modified, Cache-Control and Pragma (though while testing I have also tried setting the headers in the action method itself, with no change in results). Without the authorize filter the headers work fine. Once I add the filter, the headers I tried to add get stripped.

    The headers I want to add:

        Response.AppendHeader("Expires", "Sun, 19 Nov 1978 05:00:00 GMT");
        Response.AppendHeader("Last-Modified", String.Format("{0:r}", DateTime.Now));
        Response.AppendHeader("Cache-Control", "no-store, no-cache, must-revalidate");
        Response.AppendHeader("Cache-Control", "post-check=0, pre-check=0");
        Response.AppendHeader("Pragma", "no-cache");

    An example of the headers from a correct page:

        Server: ASP.NET Development Server/9.0.0.0
        Date: Mon, 14 Jun 2010 17:22:24 GMT
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0
        Pragma: no-cache
        Expires: Sun, 19 Nov 1978 05:00:00 GMT
        Last-Modified: Mon, 14 Jun 2010 18:22:24 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Content-Type: text/html; charset=utf-8
        Content-Length: 352
        Connection: Close

    And from an incorrect page:

        Server: ASP.NET Development Server/9.0.0.0
        Date: Mon, 14 Jun 2010 17:27:34 GMT
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 2.0
        Pragma: no-cache, no-cache
        Cache-Control: private, s-maxage=0
        Content-Type: text/html; charset=utf-8
        Content-Length: 4937
        Connection: Close
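    One commonly suggested workaround (a sketch, not a confirmed fix for this exact setup) is to set the cache policy through Response.Cache rather than raw headers: AuthorizeAttribute touches the response's HttpCachePolicy (that is where the "private, s-maxage=0" in the broken trace comes from), and once the policy has been touched, ASP.NET rebuilds the Cache-Control header from it when the response is written. A filter along these lines keeps the no-cache behaviour even with [Authorize] applied; the attribute and action names are illustrative, not taken from the question:

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class NoCacheAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache);   // Cache-Control: no-cache
                cache.SetNoStore();                                 // adds no-store
                cache.AppendCacheExtension("must-revalidate, post-check=0, pre-check=0");
                cache.SetExpires(new DateTime(1978, 11, 19, 5, 0, 0, DateTimeKind.Utc));
                cache.SetLastModified(DateTime.Now);
                base.OnResultExecuting(filterContext);
            }
        }

        // Usage (hypothetical action):
        // [Authorize]
        // [NoCache]
        // public ActionResult RefreshWidget() { ... }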

    Read the article

  • [ask][php] find whether a dynamic filename exists

    - by r4ccoon
    Hi. I am writing a cache module in PHP. It writes each cache entry with $string + timestamp as the filename. Writing the cache is not the problem; the problem is the foreach loop I use to find the cache entry I want. This is the logic I use for getting the cache:

        foreach ($filenames as $filename) {
            if (strstr($filename, $cachename)) { // if found
                if (check_timestamp($filename, time()))
                    display_cache($filename);
                break;
            }
        }

    But when it tries to find and read the cache, it slows the server down. Imagine I have 10000 cache files in a folder and I need to check every file in that cache folder. What do you think is the best way of doing this?

    To explain again (because even I barely understand my own written question :D): I write cache files in the format filename_timestamp, e.g. cache_function_random_news_191982899010, in the folder ./cache/. When I want to get the cache, I only pass "cache_function_random_news_" and scan that folder. If I find something with that needle in a file name, I display it and break. But scanning 10000 files in a folder is not a good thing, yeah? Please give me your opinion. Thanks.
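    A sketch of one way to avoid scanning the whole folder (the helper names below are made up, not from the poster's module): make the cache filename deterministic for a given key and let the file's own mtime act as the timestamp, so a lookup is a single stat() call instead of a loop over 10000 names:

        <?php
        function cache_path($key) {
            return './cache/' . md5($key) . '.cache';
        }

        function cache_get($key, $max_age) {
            $path = cache_path($key);
            if (is_file($path) && (time() - filemtime($path)) < $max_age) {
                return file_get_contents($path);    // still fresh
            }
            return false;                           // missing or expired
        }

        function cache_set($key, $data) {
            file_put_contents(cache_path($key), $data, LOCK_EX);
        }

        // Usage:
        // if (($html = cache_get('cache_function_random_news', 300)) === false) {
        //     $html = build_random_news();    // hypothetical expensive call
        //     cache_set('cache_function_random_news', $html);
        // }
        // echo $html;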

    Read the article

  • Does anyone know why apt-cacher-ng always downloads the index file (Packages.gz) even though it exists in apt-cacher-ng's cache?

    - by soekarmana
    I just updated from 11.04 to 12.04 (fresh install), installed apt-cacher-ng, and noticed something strange: it always downloads the index file (Packages.gz) even though the file already exists in apt-cacher-ng's cache. This is exactly what happened:

    On Ubuntu 10.10 & 11.04: apt-cacher-ng installed and configured on my laptop, then I reloaded and installed some packages. After that I configured my friend's laptop to use the apt-cacher-ng proxy (192.168.1.1:3142); reloading the repository was blazingly fast and finished in a second without using my Internet connection (checked in System Monitor, total received was just 15 kB).

    On Ubuntu 11.10 & 12.04: apt-cacher-ng installed and configured on my laptop, then I reloaded and installed some packages. After that I configured my friend's laptop to use the apt-cacher-ng proxy (192.168.1.1:3142); reloading the repository was really slow! apt-cacher-ng re-downloaded the index file from the Internet.
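    For reference, pointing a client at an apt-cacher-ng instance is a one-line APT proxy snippet such as the following (the address is the one mentioned above; the file name is just a common convention):

        # /etc/apt/apt.conf.d/01proxy on the client machine
        Acquire::http::Proxy "http://192.168.1.1:3142";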

    Read the article

  • Linux 3.6 released as stable: hybrid suspend, TCP Fast Open, VFIO, Btrfs improvements and removal of the IPv4 routing cache

    Linux 3.6 is out as a stable release, adding hybrid suspend, TCP Fast Open, VFIO, Btrfs improvements and removal of the IPv4 routing cache. Linus Torvalds has just announced the release of the stable 3.6 version of the Linux kernel. The flagship new feature of this release is the introduction of a hybrid suspend mode, long supported by Windows and Mac OS X. The Suspend to Both option (combined suspend and hibernation) suspends the machine while both writing the contents of RAM to the hard disk (suspend-to-disk) and keeping the system state in memory (suspend-to-RAM). The great advantage of combining these two techniques is that they allow the...

    Read the article

  • The Jelix PHP framework is available in version 1.4: PSR-0 compatibility, virtual templates and HTTP cache handling headline the release

    Jelix 1.4 is available! PSR-0 compatibility, virtual templates and HTTP cache handling headline this release of the PHP framework. Amid all the recent flurry of PHP framework updates, one could almost have missed the release of Jelix. [IMG]http://idelways.developpez.com/news/images/jelix.png[/IMG] Jelix is, and remains, one of the best existing PHP frameworks, thanks to a design that has often been ahead of other tools. I am thinking of the modularity and the event handling that have been part of Jelix for many years and are only just starting to appear in some so-called major frameworks. A new major version of Jelix ...

    Read the article

  • GRUB-2 Bootloader fails to load for lack of floppy drive. Ubuntu 10.4 & Windows XP

    - by kammer
    2010.07.21 while trying to install Ubuntu 10.4 Hello all, I've been trying to install Ubuntu 10.04 on my Dell workstation and am unable to get the Grub-2 bootloader to load properly. It seems to be failing for lack of a floppy drive on the system resulting in an error message that reads : error: fd0 cannot get C/H/S values. I've gone through the Grub-2 page at https://help.ubuntu.com/community/Grub2 to no avail and other sources having similar problems have likewise turned up no solutions. I would certainly appreciate any insight, here's the background: A while back I was trying to install a different version of Linux and had the same problems, then had to set the project aside for a bit. I don't think this has anything to do with Linux or Ubuntu per se, but rather Grub. The system is an old (4-5 years) Dell workstation that has one drive (128 GB) set up for Windows XP and a second new drive (500GB) which I installed for Linux. There is a DVD/CD drive and the system contains no floppy drive at all. In one attempt to get this working I tried modifying the BIOS to indicate there was a floppy drive - this created a failure earlier in the chain with the BIOS failing to load properly, not unexpected, just a shot in the dark at that point. At the moment I am considering just running out to buy and install a cheap floppy drive to see if that helps. I'll never use the thing though so I'd rather find a solution that doesn't require me to spend money on useless hardware. In any case, here's the /boot/grub/grub.cfg contents: # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ ${prev_saved_entry} ]; then set saved_entry=${prev_saved_entry} save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z ${boot_once} ]; then saved_entry=${chosen} save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi } insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=10 fi insmod play play 480 440 1 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro quiet splash initrd /boot/initrd.img-2.6.32-21-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod 
ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 echo 'Loading Linux 2.6.32-21-generic ...' linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-21-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Microsoft Windows XP Home Edition (on /dev/sda1)" { insmod ntfs set root='(hd0,1)' search --no-floppy --fs-uuid --set 6ef0d4b4f0d4842d drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### Thoughts anyone? Thanks in advance.
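    One workaround often suggested for phantom-floppy problems on Ubuntu (an assumption here, not a verified fix for this exact "error: fd0 cannot get C/H/S values" message) is to make sure nothing probes fd0: disable the floppy controller in the BIOS if the option exists, blacklist the kernel floppy module, and regenerate the boot files from the installed system:

        echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf
        sudo rmmod floppy            # ignore the error if the module is not loaded
        sudo update-initramfs -u
        sudo update-grub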

    Read the article

  • Optimizing Haskell code

    - by Masse
    I'm trying to learn Haskell and after an article in reddit about Markov text chains, I decided to implement Markov text generation first in Python and now in Haskell. However I noticed that my python implementation is way faster than the Haskell version, even Haskell is compiled to native code. I am wondering what I should do to make the Haskell code run faster and for now I believe it's so much slower because of using Data.Map instead of hashmaps, but I'm not sure I'll post the Python code and Haskell as well. With the same data, Python takes around 3 seconds and Haskell is closer to 16 seconds. It comes without saying that I'll take any constructive criticism :). import random import re import cPickle class Markov: def __init__(self, filenames): self.filenames = filenames self.cache = self.train(self.readfiles()) picklefd = open("dump", "w") cPickle.dump(self.cache, picklefd) picklefd.close() def train(self, text): splitted = re.findall(r"(\w+|[.!?',])", text) print "Total of %d splitted words" % (len(splitted)) cache = {} for i in xrange(len(splitted)-2): pair = (splitted[i], splitted[i+1]) followup = splitted[i+2] if pair in cache: if followup not in cache[pair]: cache[pair][followup] = 1 else: cache[pair][followup] += 1 else: cache[pair] = {followup: 1} return cache def readfiles(self): data = "" for filename in self.filenames: fd = open(filename) data += fd.read() fd.close() return data def concat(self, words): sentence = "" for word in words: if word in "'\",?!:;.": sentence = sentence[0:-1] + word + " " else: sentence += word + " " return sentence def pickword(self, words): temp = [(k, words[k]) for k in words] results = [] for (word, n) in temp: results.append(word) if n > 1: for i in xrange(n-1): results.append(word) return random.choice(results) def gentext(self, words): allwords = [k for k in self.cache] (first, second) = random.choice(filter(lambda (a,b): a.istitle(), [k for k in self.cache])) sentence = [first, second] while len(sentence) < words or sentence[-1] is not ".": current = (sentence[-2], sentence[-1]) if current in self.cache: followup = self.pickword(self.cache[current]) sentence.append(followup) else: print "Wasn't able to. Breaking" break print self.concat(sentence) Markov(["76.txt"]) -- module Markov ( train , fox ) where import Debug.Trace import qualified Data.Map as M import qualified System.Random as R import qualified Data.ByteString.Char8 as B type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int) train :: [B.ByteString] -> Database train (x:y:[]) = M.empty train (x:y:z:xs) = let l = train (y:z:xs) in M.insertWith' (\new old -> M.insertWith' (+) z 1 old) (x, y) (M.singleton z 1) `seq` l main = do contents <- B.readFile "76.txt" print $ train $ B.words contents fox="The quick brown fox jumps over the brown fox who is slow jumps over the brown fox who is dead."
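    One thing worth noting about the Haskell train as it appears here: the partially applied M.insertWith' is only forced with seq and the untouched l is returned, so the per-triple insertion never makes it into the map that train returns. A minimal sketch of a strict, accumulating rewrite with the same Database type (swapping Data.Map for a hash map from unordered-containers would be a separate, further step):

        import Data.List (foldl')
        import qualified Data.Map as M
        import qualified Data.ByteString.Char8 as B

        type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int)

        -- Strict left fold that threads the accumulated map through every word triple.
        train' :: [B.ByteString] -> Database
        train' ws = foldl' step M.empty (zip3 ws (drop 1 ws) (drop 2 ws))
          where
            step db (x, y, z) =
              M.insertWith' (M.unionWith (+)) (x, y) (M.singleton z 1) db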

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration. 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) The Varnish Servers performance are awfully bad in general, to the point that if we shut down one of them the other two are unable to fullfill all the requests and start to skip beats resulting in pending requests, timeouts, 404, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k request per seconds during our max peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output: acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared - Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 20 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group varnish (113) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support off [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] idle_send_timeout 60 [seconds] listen_address :80 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] pcre_match_limit 10000 [] pcre_match_limit_recursion 10000 [] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 600 [seconds] sess_timeout 5 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 2000 [threads] thread_pool_min 5 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user varnish (106) vcc_err_unref on [bool] vcl_dir /etc/varnish vcl_trace off [bool] vmod_dir /usr/lib/varnish/vmods waiter default (epoll, poll) This is our default.vcl file: LINK sub vcl_recv { # BASIC recv COMMANDS: # # lookup -> search the item in the cache # pass -> always serve a fresh item (no-caching) # pipe -> like pass but ensures a direct-connection with the backend (no-cache AND no-proxy) # Allow the backend to serve up stale content if it is responding slow. 
# This defines when Varnish should use a stale object if it has one in the cache. set req.grace = 30s; if (client.ip == "127.0.0.1") { # request from NGINX - do not alter X-Forwarded-For set req.http.HTTPS = "on"; } else { # Add an X-Forwarded-For to keep track of original request unset req.http.HTTPS; unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } set req.backend = www_director; # Strip all cookies to force an anonymous request when the back-end servers are down. if (!req.backend.healthy) { unset req.http.Cookie; } ## HHTP Accept-Encoding if (req.http.Accept-Encoding) { if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* non-RFC2616 or CONNECT */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { return (pass); } if (req.http.HTTPS ~ "on") { return (pass); } ###################################################### # COOKIE HANDLING ###################################################### # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) { if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { return (pass); } } return (lookup); } # Code determining what to do when serving items from the IIS Server sub vcl_fetch { unset beresp.http.Server; set beresp.http.Server = "Server-1"; # Allow items to be stale if needed. This is the maximum time Varnish should keep an object. set beresp.grace = 1h; if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") { unset beresp.http.set-cookie; } # Default Varnish VCL logic if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has specific TB_NC no-caching cookie if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { set beresp.http.X-Cacheable = "NO:Got Cookie"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control private else if (beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") { set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)"; set beresp.ttl = 120 s; return(hit_for_pass); } # If we reach to this point, the object is cacheable. # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish # and can be checked using the following command: # varnishadm param.show default_ttl else if (beresp.ttl < 1s) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } # Cacheable and with valid TTL. 
else { set beresp.http.X-Cacheable = "YES"; } # DEBUG INFO (Cookies) # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie; return(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; if (obj.status == 404) { synthetic {" <!-- Markup for the 404 page goes here --> "}; } else if (obj.status == 500) { synthetic {" <!-- Markup for the 500 page goes here --> "}; } else if (obj.status == 503) { if (req.restarts < 4) { return(restart); } else { synthetic {" <!-- Markup for the 503 page goes here --> "}; } } else { synthetic {" <!-- Markup for a generic error page goes here --> "}; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } Thanks in advance,
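    Given the param.show output above (2 thread pools, thread_pool_min of 5, no explicit storage size shown), one commonly suggested starting point, and only a starting point, is to raise the thread limits and give varnishd an explicit malloc storage size in its startup options, for example in /etc/default/varnish (the values are illustrative and need testing against real traffic):

        DAEMON_OPTS="-a :80 \
                     -T localhost:6082 \
                     -f /etc/varnish/default.vcl \
                     -s malloc,2G \
                     -p thread_pools=4 \
                     -p thread_pool_min=200 \
                     -p thread_pool_max=4000 \
                     -p thread_pool_add_delay=2"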

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a dir that I regularly update. This will change the file content (and file size) as well as the modification date, but not the filename. When I access the files through HTTP, the updates are not taken into account and lighttpd serves the old file. If I manually rename the file to something different, lighttpd returns a 404 error, and if I rename my file back, I get the correct updated version. It seems like lighttpd is using some kind of caching mechanism of its own (which is fine) to serve static files. Unfortunately, that mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. lighttpd returns a 200 OK when the file is requested from an empty cache, and a 304 Not Modified otherwise, as expected, but the file comes back with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighttpd to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
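    lighttpd keeps its own stat cache for served files. A commonly suggested thing to try in this situation (a guess, not a guaranteed fix) is to turn that cache off in lighttpd.conf and see whether updated files are then served immediately:

        # lighttpd.conf: disable the stat cache (costs an extra stat() per request)
        server.stat-cache-engine = "disable"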

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10 secs and 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan showed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are found to be present and index fragmentation is also minimal as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now."

    Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: blocking, a bad plan in cache, outdated statistics, or a hardware bottleneck.

    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to roll back, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.

    The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.

    To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
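    To make the checks above concrete, a short T-SQL sketch (the object names are placeholders, not from the reader's system):

        -- Current blocking chains (the head blocker has blocked = 0 itself)
        SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
        FROM master..sysprocesses
        WHERE blocked <> 0;

        -- Evict just one procedure's plan instead of the whole procedure cache
        EXEC sp_recompile 'dbo.usp_SlowReport';

        -- Refresh statistics on a table the bulk import just changed
        UPDATE STATISTICS dbo.ImportedOrders WITH FULLSCAN;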

    Read the article

  • "Loading operating system... boot error" when booting from live CD

    - by jeremy
    I'm having a problem installing Ubuntu 12.10 on a new drive. I was running Windows 7 on my SSD, but when the drive crashed I decided to use that as an excuse to make the switch to Ubuntu. I've been experimenting with it on my old laptop until I got my SSD replaced under warranty. Now I have my SSD back and want to install Ubuntu on my desktop machine. I used UNetbootin to make a bootable flash drive, then went into my BIOS and made sure USB loads before the hard drive. However, when I try to boot from it I get an error that says: Loading operating system ... boot error. I know the flash drive works, because if I boot my laptop or my other Windows PC from it, it loads into Ubuntu; the error only appears when I try it on the PC with no OS currently on the drive.

    Read the article

  • Can't open a separate USB drive when booting from a live USB install of 12.10

    - by Hulgerorsh
    My problem is using a separate flash drive when running from my live USB of Ubuntu 12.10. When I choose Try Ubuntu from this USB drive, it no longer lets me access my other flash drive; it automatically treats it as part of the live USB even though it's a completely separate flash drive. Ubuntu loads fine and everything, but when I go to access my other flash drive nothing happens. When I try to open it from the home folder it gives me an "Unable to mount 3.5 GB volume: Adding read ACL for uid 999 to /media/ubuntu failed: Operation not supported" message. My live USB is on an 8 GB flash drive and my other flash drive is 4 GB.
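    A hedged workaround, rather than a root-cause fix: identify the second stick's partition and mount it by hand from the live session (the device name below is a guess, so check it first):

        sudo fdisk -l                        # find the 4 GB stick, e.g. /dev/sdc1
        sudo mkdir -p /media/otherusb
        sudo mount /dev/sdc1 /media/otherusb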

    Read the article

  • Why is the hard drive still full after deleting some files?

    - by julio
    I have a server running Ubuntu Server 12.xx. Today some services stopped and I found some messages about a full disk, so I ran df -h:

        Filesystem               Size  Used  Avail  Use%  Mounted on
        /dev/mapper/ubuntu-root  455G  434G     0   100%  /
        udev                     1,7G  4,0K   1,7G    1%  /dev
        tmpfs                    689M  4,2M   685M    1%  /run
        none                     5,0M     0   5,0M    0%  /run/lock
        none                     1,7G     0   1,7G    0%  /run/shm
        /dev/sda1                228M   51M   166M   24%  /boot
        overflow                 1,0M     0   1,0M    0%  /tmp

    I tried to delete some files remotely from a Windows computer by right-clicking and choosing "delete", but the hard drive remained full. Is there a Trash folder in Ubuntu Server? What could be happening?
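    Two checks that often explain this symptom (hedged suggestions, not a diagnosis): space still held by deleted-but-open files, and files that were moved to a trash folder over the network instead of being removed:

        sudo lsof +L1                                     # open files whose directory entry is already deleted
        du -sh /home/*/.local/share/Trash 2>/dev/null     # per-user trash folders
        sudo du -xh --max-depth=1 / | sort -h | tail      # where the space on / actually went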

    Read the article

  • It's hard to judge your own product. Is there a name for this issue?

    - by Epicmaster
    I was just talking to my partner about how hard it is to judge your own product after a while, because you use it so often. You literally spend hours on your computer doing nothing but work on this consumer-facing application, and you start to feel a little fatigue from using it over and over and over, at least a hundred times a day. You get scared that this fatigue may mean the product you are building has the same effect on the users, and might mean you are doing something wrong. Is there a name for this in product development? For the fact that, as the designer + programmer + everything else, your product might not suck as much as you think it does, simply because you spend way too much time with it, or some variation of this?

    Read the article

  • OK Programming language from USB stick with no installation

    - by tovare
    I'm looking for a compiler or interpreter for a language with basic math support and file I/O which can be executed directly from a memory stick on either Linux or Windows. Built-in functionality for basic data structures and sorting/searching would be a plus. (I've read about Movable Python, but it only supports Windows.) Thank you

    Read the article

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that, while built on top of ASP.NET's caching feature, adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant, but beyond that, it does offer a more modern API. Here's for example how I'm using the API in the new version of my Favicon module:

        _cacheManager.Get(
            "Vandelay.Favicon.Url",
            ctx => {
                ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
                var faviconSettings = ...;
                return faviconSettings.FaviconUrl;
            });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

        var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
        if (faviconUrl == null) {
            faviconUrl = ...;
            cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
        }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary, as the type can be inferred from the return type of the Lambda. The Lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

        _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration or monitoring of the files under a virtual path, but you can also come up with your own.

    Read the article

  • Developer Preview of JDK8, JavaFX8 *HARD-FLOAT ABI* for Linux/ARM Now Available!

    - by HecklerMark
    Just a quick post to spread the good word: the Developer Preview of JDK8 and JavaFX8 for Linux on ARM processors - hard-float ABI - is now available here. Right here. It's been tested on the Raspberry Pi, and many of us plan to (unofficially) test it on a variety of other ARM platforms. This could be the beginning of something big. So...what are you still doing here? Go download it already! (Did I mention you could get it here?) :-D All the best, Mark

    Read the article

  • How do I create a LiveUSB that lets me install any Ubuntu version?

    - by Dariopnc
    What I'd like to do: have Ubuntu installed on a USB drive and, from there, install any Ubuntu version on a hard drive. This is kinda different from using usb-creator, because I'd like to have a persistent Ubuntu install on the USB drive and not upgrade it every 6 months, yet still be able to install the most recent Ubuntu version from it. I think it's just a matter of configuring Ubiquity, but I don't know if this is the case or how exactly to do it.

    EDIT: Let's clarify the persistent thing. Suppose I have my USB drive with Ubuntu Precise on it; suppose Quantal is out in the wild; suppose I want to install Quantal on the hard drive of a computer; and suppose I want/can use only the USB drive with Precise on it. Normally I would have to erase/upgrade Precise on the USB drive to Quantal and then install it on the hard drive. But I don't want to touch my Ubuntu install on the USB drive; I'd like to be able to install the newer Ubuntu version (Quantal) on a hard drive from the older one (Precise) on the USB drive, and I'd rather avoid upgrading the installation on the hard drive afterwards. Hope this helps.

    Read the article

  • RPi and Java Embedded: Hard-Float Support is Here!!!

    - by hinkmond
    You wanted Java Embedded with Hardware Floating Point support to install on a default Raspbian environment for your Raspberry Pi? Well, you just got your wish. Merry Christmas! See: Developer JDK 8 for ARM w/Hard-Float Here's a quote: The Java SE 8 Developer Preview Release for ARM including JavaFX (JDK 8) on Linux has been made available at http://jdk8.java.net. The Developer Preview is provided to the community to get feedback on the ongoing progress of the project. Developers can start developing applications using this build of Java SE 8 on an ARM device, such as a Raspberry Pi. It's a regular JDK (Java SE 8 preview) for your Raspberry Pi, so you should note this means there is a javac (and the other typical JDK tools) available to compile your Java apps right there on the device! Woot! I'll cover step-by-step instructions on how to do that in a future blog post. Stay tuned... Hinkmond

    Read the article

  • How to add UTF-8 support to my hard disk in fstab?

    - by punkmexic
    I use Ubuntu (in Spanish). Sometimes I get encoding errors when I use special characters, so I read that if I edit a file on my hard disk (using gedit /etc/fstab) and add utf8 I can fix it. I had this line:

        UUID=bfb5b95e-bf68-464a-8abf-d6027b039fa4 / ext4 errors=remount-ro 0 1

    I added utf8 like this:

        UUID=bfb5b95e-bf68-464a-8abf-d6027b039fa4 / ext4 errors=remount-ro,iocharset=utf8 0 1

    But that messed up my Ubuntu and I can't log in any more, so I'm using a live session. I'll have to remove that option in order to be able to use my Ubuntu again. Can someone tell me what that line should look like?
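    For what it's worth, utf8/iocharset are mount options for filesystems such as vfat or ntfs, not for ext4, which is why the modified line keeps the system from booting normally. To recover from the live session, a minimal sketch (assuming the installed root partition is /dev/sda1; check what sudo fdisk -l actually reports) is to mount it and put the line back the way it was:

        sudo fdisk -l                        # find the installed root partition
        sudo mount /dev/sda1 /mnt
        sudo nano /mnt/etc/fstab             # restore: UUID=...  /  ext4  errors=remount-ro  0  1
        sudo umount /mnt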

    Read the article

  • listing network shares with python

    - by Gearoid Murphy
    Hello. If I explicitly attempt to list the contents of a shared directory on a remote host using Python on a Windows machine, the operation succeeds; for example, the following snippet works fine:

        os.listdir("\\\\remotehost\\share")

    However, if I attempt to list the network drives/directories available on the remote host, Python fails. An example of this is shown in the following code snippet:

        os.listdir("\\\\remotehost")

    Is anyone aware of why this doesn't work? Any help/workaround is appreciated.
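    A bare \\remotehost names a server, not a directory, so there is nothing for os.listdir to walk. One common approach on Windows (a sketch that assumes the pywin32 package is installed) is to ask the server for its share list first, then listdir the shares you care about:

        import win32net   # from the pywin32 package

        def remote_shares(server):
            # NetShareEnum at info level 0 returns dicts like {'netname': 'share'}
            shares, _total, _resume = win32net.NetShareEnum(server, 0)
            return [s['netname'] for s in shares]

        print remote_shares('remotehost')   # e.g. ['share', 'IPC$', ...]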

    Read the article

  • Can I install Ubuntu 12.04 on MacBook 6.1 (Late 2009) on my external hard drive?

    - by tommywinarta
    I have no experience installing Linux-based OSes on a MacBook, but I already have Windows installed on my Mac. I read some articles saying I can have Lucid Lynx installed on my MacBook 6.1, and luckily I already got the CD (which was distributed for free back then, haha). My question is: can I install 12.04 instead of 10.04, and what do I need to do to achieve that? I would also like to know whether I can install it on my external hard drive, just like installing it on a USB stick. I have viewed the how-to for installing Lucid Lynx; is it just the same? Thanks in advance!

    Read the article

  • What kind of programs/solutions can only be written with OOP or are too hard to achieve without it?

    - by user1598390
    Paraphrasing a recent question: What is Object Oriented Programming ill-suited for? I would like to ask the opposite question: What kind of programs cannot be written unless you use OOP? What kind of programs are not recommended to be written using non-OOP techniques? What kind of programs need OOP in order to even be written? What kind of programs would be too hard to write without OOP ? The answer to this question can help sell the idea of OOP to project leaders that have no special interest in code quality. At least they could buy the idea if one shows them the kind of things that are not even possible unless you use OOP.

    Read the article
