Search Results

Search found 7913 results on 317 pages for 'resource caching'.


  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution at a single server in our production cluster, CloudFront caches our assets properly. Is it possible that the sticky-session cookie, and the header the load balancer adds along with it, could be the issue?

        Cache-Control: no-cache="set-cookie"    // added by the load balancer

    Any ideas? FYI - currently we have our custom origin pointing at a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
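
    One way to confirm the ELB is the variable (hostnames below are hypothetical placeholders) is to compare the headers returned through the load balancer against those from a single instance, and check whether the extra Cache-Control directive appears only on the ELB path:

        # hostnames are placeholders; substitute your ELB and instance DNS names
        curl -I http://my-elb-1234567890.us-east-1.elb.amazonaws.com/css/9850999.css
        curl -I http://ec2-203-0-113-10.compute-1.amazonaws.com/css/9850999.css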

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization. Also, there would be more time before TRIM became necessary. Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Many SSDs also contain internal caches of some sort, and presumably those caches are larger on correspondingly larger SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

    Read the article

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache-control headers. In particular: If we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent modified date (or ETag value)? Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy to be stale, and from that point forward not cache it? Would a best practice be to set a cache-control max-age of 1 week, then 8 days before I know I am going to make a change, set it down to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?
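
    For reference, the IIS 7.5 setting in question lives under staticContent in web.config; a sketch (the max-age value is illustrative) that emits a Cache-Control: max-age header for static files. Switching cacheControlMode to DisableCache corresponds to expiring content immediately:

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- sends Cache-Control: max-age=604800 (one week) -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>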

    Read the article

  • Strategy for Incremental Datasource Fetching in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: Is there a way to force Excel to append the results of every update (triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine after 10 years: Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? A sketch of the staging step follows below. Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...
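
    A minimal sketch of the staging step in approach 2 (all table and column names here are hypothetical): each weekly ETL run appends only rows newer than what the staging table already holds, so Excel reads a table that grows incrementally instead of re-fetching everything:

        -- hypothetical schema; appends only the newly published week(s)
        INSERT INTO staging_results (week_id, metric, value)
        SELECT week_id, metric, value
        FROM   weekly_source
        WHERE  week_id > (SELECT COALESCE(MAX(week_id), 0) FROM staging_results);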

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focused on processor speed and memory: Quad-Core Intel Xeon X3323 2.5GHz (2x3M cache), 1333MHz FSB, 16GB DDR2 667MHz. But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs: 2x Dual-Core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT, 16GB DDR2 667MHz. Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins think. Thanks for any advice.

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filename. When I access the files over HTTP, the updates are not taken into account and lighty serves the old file. If I manually rename the file to something different, lighttpd returns a 404 error; and if I rename the file back, I get the correct, updated version. It seems lighty is using some kind of cache mechanism of its own (which is fine) to return static files. Unfortunately, this mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file; this is not a browser caching issue. The server returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
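
    lighttpd does cache stat() results, and its stat-cache engine is configurable; one hedged thing to try (whether it explains this particular behavior is an assumption) is disabling it in lighttpd.conf:

        # lighttpd.conf: don't cache stat() results; "simple" is the default engine
        server.stat-cache-engine = "disable"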

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on a NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything. So it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52 How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
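
    One avenue worth testing, as an assumption rather than a known fix: Linux's FS-Cache can give NFS a persistent local cache via the fsc mount option, which requires the cachefilesd daemon to be installed and running; raising the attribute-cache timeout may also reduce re-validation traffic. A sketch of an /etc/fstab entry (export and mount paths hypothetical):

        # requires the cachefilesd package; actimeo is in seconds
        server:/export/home  /home  nfs  rw,hard,fsc,actimeo=600  0  0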

    Read the article

  • Domain joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008, for a user on a laptop that will travel a fair amount

    - by user71195
    I'm basically debating whether or not to join a laptop to the domain, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it's a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is to not join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log in to it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated. Laptop OS: Windows 7 (not purchased yet; Pro if domain join is needed). Server: SBS 2008, Exchange 2007. Outlook version: 2010. Thanks for any help, Mike

    Read the article

  • Let Varnish send old data from the cache while it's fetching a new version?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, nginx) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this: a new client requests the page; Varnish recognizes the cache timeout; the client waits; Varnish fetches the new page from the backend; Varnish delivers the new page to the client (and caches it, too, so the next request gets it instantly). What I would like instead: the client requests the page; Varnish recognizes the timeout; Varnish delivers the old page to the client; Varnish fetches the new page from the backend and puts it into the cache. In my case it's not a site where outdated information is a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06 ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08 ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22 ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10 ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12 ...

    I left out the hundreds of requests completing in 0.02s or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do better here? (I came across "Varnish send while cache"; it sounded similar, but it's not exactly what I'm trying to do.)
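
    This is the scenario Varnish's grace mode targets. A sketch for Varnish 3 (TTL values illustrative), with one caveat stated as an assumption: in Varnish 3 a stale object is served while a fetch for it is already in flight, so the very first client after expiry may still wait; fully asynchronous background refresh only arrived in later versions:

        sub vcl_recv {
            # be willing to serve objects up to 1 hour past their TTL
            set req.grace = 1h;
        }

        sub vcl_fetch {
            set beresp.ttl = 1m;
            # keep expired objects around so they can be served during a refetch
            set beresp.grace = 1h;
        }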

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like, most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
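
    For what it's worth, the client-side knobs that weaken or disable the caching involved are mount options; a sketch of the strict end of the spectrum (whether this yields actual ordering guarantees, as opposed to merely narrowing the window, is an open assumption):

        # noac disables attribute caching, sync forces synchronous writes,
        # lookupcache=none disables directory-entry caching
        mount -o hard,sync,noac,lookupcache=none serverB:/export /mnt/shared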

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a web server that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2G of memory and is currently using about 1.8G of it. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how, if I were to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic to use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).

    Read the article

  • Why won't Google Chrome open Google Docs, and why is it slow with GMail?

    - by Philip
    OS X 10.6 Snow Leopard: Google Chrome works like a charm most of the time. I have many extensions installed, and I've tried disabling/re-enabling them one by one to find a culprit, with no luck. Here are the problems: (1) Chrome is slow to load Gmail. I am fairly sure that clearing the cache alleviates this problem. But I can open Safari, type in the URL, and log in to Gmail in the time it sometimes takes Chrome to open the page. Shouldn't caching help the page load?! And sometimes, even after a recent cache clear, it is still slow to load. Thoughts? (2) Chrome won't open Google Docs at all. Safari does. Again, I tried disabling extensions one by one, and I allow cookies, even third-party cookies. But I am still told it has a redirect loop. Any help would be much appreciated. Thanks!

    Read the article

  • Nginx Removes the index.php from URLs

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need my URLs to keep the index.php - please help.

    Read the article

  • Apache mod_disk_cache on a separate drive

    - by pavs
    How can I set up Apache mod_disk_cache on a separate drive from the one where the OS/Apache is installed? I have set this up in my apache2.conf:

        <IfModule mod_cache_disk.c>
            # cache cleaning is done by htcacheclean, which can be configured in
            # /etc/default/apache2
            #
            # For further information, see the comments in that file,
            # /usr/share/doc/apache2/README.Debian, and the htcacheclean(8)
            # man page.

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /media/cacheHD

            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /

            # The result of CacheDirLevels * CacheDirLength must not be higher than
            # 20. Moreover, pay attention on file system limits. Some file systems
            # do not support more than a certain number of inodes and
            # subdirectories (e.g. 32000 for ext3)
            CacheDirLevels 2
            CacheDirLength 1
        </IfModule>

    It doesn't seem to be caching anything at all. The drive itself is a freshly installed SSD formatted with ext4.
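
    Two hedged things worth ruling out before digging deeper (Debian/Ubuntu commands, and assuming Apache 2.4, where the module is named mod_cache_disk): that the module pair is actually enabled, and that the cache root is writable by the Apache user:

        sudo a2enmod cache cache_disk
        sudo chown -R www-data:www-data /media/cacheHD
        sudo service apache2 restart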

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school, which makes a regular Linux server appear as a domain controller. There are two relevant benefits to this: I have a Samba share for people's documents, and I use the Redirected Folders feature so that any employee can sit down at any PC, log in with their domain credentials, and have their My Documents point to network storage. Everyone has a mapped drive (using Group Policy Preferences) to a share specific to their account type. Students can access one share (one share for all students), teachers have another, and office staff have another. However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want it. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical, if only because only professional editions of Windows support it. Ideally, any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.) The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions. Writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba 4. Right now I am completely flexible. I am even fine with scrapping the whole domain and using some other software for the two features above. How can I give school employees and students the freedom to securely share files without requiring everyone to have specific editions of Windows?

    Read the article

  • ORACLE: PMON Release Lock

    - by Liu Maclean
    Anyone familiar with Oracle knows that the PMON background process is responsible for cleaning up dead Oracle processes: it releases the enqueue locks the dead process held and cleans up its latches. What follows uses KST tracing to observe exactly how PMON cleans up a dead process and releases its locks. The demonstration below shows the Oracle behavior:

        SQL> select * from v$version;

        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        PL/SQL Release 11.2.0.3.0 - Production
        CORE    11.2.0.3.0      Production
        TNS for Linux: Version 11.2.0.3.0 - Production
        NLSRTL Version 11.2.0.3.0 - Production

        SQL> select * from global_name;

        GLOBAL_NAME
        --------------------------------------------------------------------------------
        www.oracledatabase12g.com

        SQL> select pid,program from v$process;

               PID PROGRAM
        ---------- ------------------------------------------------
                 1 PSEUDO
                 2 [email protected] (PMON)
                 3 [email protected] (PSP0)
                 4 [email protected] (VKTM)
                 5 [email protected] (GEN0)
                 6 [email protected] (DIAG)
                 7 [email protected] (DBRM)
                 8 [email protected] (PING)
                 9 [email protected] (ACMS)
                10 [email protected] (DIA0)
                11 [email protected] (LMON)
                12 [email protected] (LMD0)
                13 [email protected] (LMS0)
                14 [email protected] (RMS0)
                15 [email protected] (LMHB)
                16 [email protected] (MMAN)
                17 [email protected] (DBW0)
                18 [email protected] (LGWR)
                19 [email protected] (CKPT)
                20 [email protected] (SMON)
                21 [email protected] (RECO)
                22 [email protected] (RBAL)
                23 [email protected] (ASMB)
                24 [email protected] (MMON)
                25 [email protected] (MMNL)
                26 [email protected] (MARK)
                27 [email protected] (D000)
                28 [email protected] (SMCO)
                29 [email protected] (S000)
                30 [email protected] (LCK0)
                31 [email protected] (RSMN)
                32 [email protected] (TNS V1-V3)
                33 [email protected] (W000)
                34 [email protected] (TNS V1-V3)
                35 [email protected] (TNS V1-V3)
                37 [email protected] (ARC0)
                38 [email protected] (ARC1)
                40 [email protected] (ARC2)
                41 [email protected] (ARC3)
                43 [email protected] (GTX0)
                44 [email protected] (RCBG)
                46 [email protected] (QMNC)
                47 [email protected] (TNS V1-V3)
                48 [email protected] (TNS V1-V3)
                49 [email protected] (Q000)
                50 [email protected] (Q001)
                51 [email protected] (GCR0)

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean(t1 int);
        Table created.

        SQL> insert into maclean values(1);
        1 row created.

        SQL> commit;
        Commit complete.

    Note the process numbers we will need later: PID=2 is PMON, PID=11 LMON, PID=18 LGWR, PID=20 SMON and PID=12 LMD. Next we build an "enq: TX - row lock contention" scenario between two sessions, kill the holding process, and watch PMON recover the dead process and release its TX lock. Process A:

        SQL> select addr,spid,pid from v$process
             where addr = (select paddr from v$session
                           where sid=(select distinct sid from v$mystat));

        ADDR             SPID                            PID
        ---------------- ------------------------ ----------
        00000000BD516B80 17880                            46

        SQL> select distinct sid from v$mystat;

               SID
        ----------
                22

        SQL> update maclean set t1=t1+1;
        1 row updated.
    Process B:

        SQL> select addr,spid,pid from v$process
             where addr = (select paddr from v$session
                           where sid=(select distinct sid from v$mystat));

        ADDR             SPID                            PID
        ---------------- ------------------------ ----------
        00000000BD515AD0 17908                            45

        SQL> update maclean set t1=t1+1;
        HANG..............

    Process B now hangs on "enq: TX - row lock contention". From a third session (Process C) we can list the enqueue locks Process A holds, then set up tracing:

        SQL> set linesize 200 pagesize 1400
        SQL> select * from v$lock where sid=22;

        ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
        ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
        00000000BDCD7618 00000000BDCD7670         22 AE        100          0          4          0         48          2
        00007F63268A9E28 00007F63268A9E88         22 TM      77902          0          3          0         32          2
        00000000B9BB4950 00000000B9BB49C8         22 TX     458765        892          6          0         32          1

    Process A holds three enqueue locks: AE, TM and TX.

        SQL> alter system switch logfile;
        System altered.

        SQL> alter system checkpoint;
        System altered.

        SQL> alter system flush buffer_cache;
        System altered.

        SQL> alter system set "_trace_events"='10000-10999:255:2,20,33';
        System altered.

        SQL> ! kill -9 17880

    Killing Process A lets Process B's update go through. We then dump errorstack and KST trace information for both PMON and Process B:

        SQL> oradebug setorapid 2;
        Oracle pid: 2, Unix process pid: 17533, image: [email protected] (PMON)
        SQL> oradebug dump errorstack 4;
        Statement processed.
        SQL> oradebug tracefile_name
        /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_pmon_17533.trc
        SQL> oradebug setorapid 45;
        Oracle pid: 45, Unix process pid: 17908, image: [email protected] (TNS V1-V3)
        SQL> oradebug dump errorstack 4;
        Statement processed.
        SQL> oradebug tracefile_name
        /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_17908.trc

    PMON's KST trace shows the following:
        2012-05-18 10:37:34.557225 :8001ECE8:db_trace:ktur.c@5692:ktugru(): [10444:2:1] next rollback uba: 0x00000000.0000.00
        2012-05-18 10:37:34.557382 :8001ECE9:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=18 num=4 loc='ksa2.h LINE:285 ID:ksasnd' id1=0 id2=0 name= type=0
        2012-05-18 10:37:34.557514 :8001ECEA:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TX-0007000d-0000037c mode=X
        2012-05-18 10:37:34.558819 :8001ECF0:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=45 num=5 loc='kji.h LINE:3418 ID:kjata: wake up enqueue owner' id1=0 id2=0 name= type=0
        2012-05-18 10:37:34.559047 :8001ECF8:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=12 num=6 loc='kjm.h LINE:1224 ID:kjmpost: post lmd' id1=0 id2=0 name= type=0
        2012-05-18 10:37:34.559271 :8001ECFC:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.559291 :8001ECFD:db_trace:ktu.c@8652:ktudnx(): [10813:2:1] ktudnx: dec cnt xid:7.13.892 nax:0 nbx:0
        2012-05-18 10:37:34.559301 :8001ECFE:db_trace:ktur.c@3198:ktuabt(): [10444:2:1] ABORT TRANSACTION - xid: 0x0007.00d.0000037c
        2012-05-18 10:37:34.559327 :8001ECFF:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TM-0001304e-00000000 mode=SX
        2012-05-18 10:37:34.559365 :8001ED00:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.559908 :8001ED01:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release AE-00000064-00000000 mode=S
        2012-05-18 10:37:34.559982 :8001ED02:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.560217 :8001ED03:db_trace:ksfd.c@15379:ksfdfods(): [10298:2:1] ksfdfods:fob=0xbab87b48 aiopend=0
        2012-05-18 10:37:34.560336 :GSIPC:kjcs.c@4876:kjcsombdi(): GSIPC:SOD: 0xbc79e0c8 action 3 state 0 chunk (nil) regq 0xbc79e108 batq 0xbc79e118
        2012-05-18 10:37:34.560357 :GSIPC:kjcs.c@5293:kjcsombdi(): GSIPC:SOD: exit cleanup for 0xbc79e0c8 rc: 1, loc: 0x303
        2012-05-18 10:37:34.560375 :8001ED04:db_trace:kss.c@1414:kssdch(): [10809:2:1] kssdch(0xbd516b80 = process, 3) 1 0 exit
        2012-05-18 10:37:34.560939 :8001ED06:db_trace:kmm.c@10578:kmmlrl(): [10257:2:1] KMMLRL: Entering: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.561091 :8001ED07:db_trace:kmm.c@10472:kmmlrl_process_events(): [10257:2:1] KMMLRL: Events: succ(3) wait(0) fail(0)
        2012-05-18 10:37:34.561100 :8001ED08:db_trace:kmm.c@11279:kmmlrl(): [10257:2:1] KMMLRL: Reg/update: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.563325 :8001ED0B:db_trace:kmm.c@12511:kmmlrl(): [10257:2:1] KMMLRL: Update: ret(0)
        2012-05-18 10:37:34.563335 :8001ED0C:db_trace:kmm.c@12768:kmmlrl(): [10257:2:1] KMMLRL: Exiting: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.563354 :8001ED0D:db_trace:ksl2.c@2598:kslwtbctx(): [10005:2:1] KSL WAIT BEG [pmon timer] 300/0x12c 0/0x0 0/0x0 wait_id=78 seq_num=79 snap_id=1

    Reading the trace step by step: PMON first releases the TX lock held by the dead Process A ("ksqrcl: release TX-0007000d-0000037c mode=X"), then posts Process B, the waiter, so that it can acquire the TX lock ("KSL POST SENT postee=45 ... kjata: wake up enqueue owner"). Process B's own trace shows it receiving PMON's post:

        ksl2.c@14563:ksliwat(): [10005:45:151] KSL POST RCVD poster=2 num=5 loc='kji.h LINE:3418 ID:kjata: wake up enqueue owner' id1=0 id2=0 name= type=0 fac#=3 posted=0x3 may_be_posted=1
        kslwtbctx(): [10005:45:151] KSL WAIT BEG [latch: ges resource hash list] 3162668560/0xbc827e10 91/0x5b 0/0x0 wait_id=14 seq_num=15 snap_id=1
        kslwtectx(): [10005:45:151] KSL WAIT END [latch: ges resource hash list] 3162668560/0xbc827e10 91/0x5b 0/0x0 wait_id=14 seq_num=15 snap_id=1

    Because this is RAC, PMON also posts LMD (the lock manager daemon) so that the corresponding GES resources get cleaned up ("KSL POST SENT postee=12 ... kjmpost: post lmd"), after which the TX release completes ("ksqrcl: SUCCESS"). PMON then aborts Process A's transaction ("ktudnx: dec cnt xid:7.13.892" and "ABORT TRANSACTION - xid: 0x0007.00d.0000037c"), releases Process A's TM lock on table MACLEAN ("ksqrcl: release TM-0001304e-00000000 mode=SX"), and releases its AE lock, which prevents dropping an edition in use ("ksqrcl: release AE-00000064-00000000 mode=S"). Next comes the cleanup of Process A itself: the GSIPC kjcsombdi() calls, and kssdch(0xbd516b80 = process, 3), where 0xbd516b80 is Process A's paddr and kssdch deletes the children of the process state object (KSS: delete children of state obj). PMON then runs kmmlrl() to update the instance goodness for the dropped session (the KMMLRL lines). Finally, with the cleanup complete, PMON returns to its 3-second "pmon timer" wait ("KSL WAIT BEG [pmon timer]").

    Read the article

  • Safari 5 Extension XMLHttpRequest Error: INVALID_STATE_ERR: DOM Exception 11

    - by unknowndomain
    I am experimenting with the new Safari 5 extensions JavaScript API, and I am having an issue right from the ground up. I want to use an XMLHttpRequest to get an RSS feed from a website; however, upon calling .send() it immediately kicks off errors: "Failed to load resource: cancelled". Then, looking at the XMLHttpRequest object, its status says: Error: INVALID_STATE_ERR: DOM Exception 11. I don't know why, but this is my code; I hope I can get some advice as to what's going wrong:

        var xml = new XMLHttpRequest();
        xml.open('GET', 'http://year3.gdnm.org/feed/');
        xml.send();

    Thanks in advance.
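
    For what it's worth, touching status or responseText before the (asynchronous) request has completed raises INVALID_STATE_ERR in WebKit, and cross-origin requests from an extension also have to be whitelisted under the extension's website-access settings; a sketch of the asynchronous pattern, under those assumptions:

        var xml = new XMLHttpRequest();
        xml.open('GET', 'http://year3.gdnm.org/feed/');
        xml.onreadystatechange = function () {
            if (xml.readyState === 4 && xml.status === 200) {
                // only now is responseText safe to read
                console.log(xml.responseText);
            }
        };
        xml.send();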

    Read the article

  • Command /usr/bin/codesign failed with exit code 1

    - by sarmenhba
    I was playing with the Keychain certificates. I wanted to remove them all and redo everything so that I could learn how it works. I have a simple app; it worked perfectly before I started playing with the certificates, so the code is fine. Now when I try to compile my app to send it to my iPhone device, it gives the error in the title. The log shows:

        Build Untitled of project Untitled with configuration Debug

        CodeSign build/Debug-iphoneos/Untitled.app
            cd /Users/sarmenhb/Desktop/myapp/Untitled
            setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /usr/bin/codesign -f -s "iPhone Developer: sarm bo (2ZDTN5FTAL)" --resource-rules=/Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app/ResourceRules.plist --entitlements /Users/sarmenhb/Desktop/myapp/Untitled/build/Untitled.build/Debug-iphoneos/Untitled.build/Untitled.xcent /Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app

        iPhone Developer: sarm bo (2ZDTN5FTAL): no identity found
        Command /usr/bin/codesign failed with exit code 1

    How do I fix this?
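
    The last line ("no identity found") says codesign can't find that certificate together with its private key in the keychain. A hedged first step is to list what the keychain actually contains and compare it against the identity Xcode is using:

        # lists the code-signing identities currently in the keychain
        security find-identity -p codesigning -v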

    Read the article

  • Is SiteCore slow and buggy?

    - by Larsenal
    I've seen plenty of negative comments regarding performance and general bugginess. However, to be fair, most of these look like they were from the v5.3 timeframe. Have they fixed all of those issues in v6.0? Is it an excellent product? Some examples of the complaints: Maybe it's just a case of user error, but this guy says, "some pages take as much as 20s to render…" Source Here, in the comments, one fellow remarks, "Sitecore backend is incredible slow. Sitecore developement is really pain, it takes from 2 minutes to start sitecore and many many seconds to do small backend operations. They claim to have a quick client, but that is a BIG LIE. All developers in my company really hate sitecore for being so slow." Source Another search yielded, "Sitecore's users listed three issues as number one: licensing, the server as a resource hog, and the site's slow responsiveness." Source

    Read the article

  • Creating an iCalendar feed and accessing it via the webcal: protocol

    - by Sagar
    Hi, I have finished creating an iCalendar feed in ASP.NET MVC. Basically, the output is a file with a .ics extension. I am able to open my file in Mozilla Sunbird (a calendar reader) and view the milestone list. Now, when I want to open it with Google Calendar, I get an error. How can I synchronize my iCal file with Google Calendar? Do I need to use the webcal:// protocol to achieve that? Basically, my feed link should appear something like webcal://proj2009.basecamphq.com/feed/global_ical?token=457bd123e18d instead of controller/action/id (which I have now). There isn't enough resource on the web for this one. Anyone, please help. Thanks in advance.
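
    Two details worth checking first, stated as assumptions rather than a confirmed fix: Google Calendar fetches feeds over plain http(s) (webcal: is a convention that desktop clients map to HTTP), and it is picky about the response carrying the text/calendar content type. A minimal MVC sketch, where BuildIcsFeed is a hypothetical helper:

        // hypothetical action; BuildIcsFeed returns the VCALENDAR text for the token
        public ActionResult GlobalIcal(string token)
        {
            string ics = BuildIcsFeed(token);
            return File(System.Text.Encoding.UTF8.GetBytes(ics), "text/calendar", "feed.ics");
        }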

    Read the article

  • Android ListView with alternate color and on focus color

    - by Yogesh
    I need to set alternate colors on ListView rows, but when I do so it removes/disables the default yellow on-focus background. I tried with a background color:

        rowView.setBackgroundColor(SOME_COLOR);

    and also with a background drawable:

        rowView.setBackgroundResource(R.drawable.view_odd_row_bg);

        <!-- Even though these two point to the same resource, have two states
             so the drawable will invalidate itself when coming out of pressed state. -->
        <item android:state_focused="true" android:state_enabled="false"
              android:state_pressed="true" android:drawable="@color/highlight" />
        <item android:state_focused="true" android:state_enabled="false"
              android:drawable="@color/highlight" />
        <item android:state_focused="true" android:state_pressed="true"
              android:drawable="@color/highlight" />
        <item android:state_focused="false" android:state_pressed="true"
              android:drawable="@color/highlight" />
        <item android:state_focused="true" android:drawable="@color/highlight" />

    But it won't work. Is there any way to set the background color and the on-focus color simultaneously, so that both work?
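
    A sketch of one approach, not a confirmed fix: give each row a state-list drawable whose last item is that row's alternate color, so the pressed/focused highlight still wins while unfocused rows fall back to their own color (@color/row_odd is a hypothetical resource):

        <?xml version="1.0" encoding="utf-8"?>
        <selector xmlns:android="http://schemas.android.com/apk/res/android">
            <item android:state_pressed="true" android:drawable="@color/highlight" />
            <item android:state_focused="true" android:drawable="@color/highlight" />
            <!-- default state: the alternate row color -->
            <item android:drawable="@color/row_odd" />
        </selector>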

    Read the article

  • How to use "SelectMany" with DataServiceQuery<>

    - by sako73
    I have the following DataServiceQuery running against an ADO.NET Data Service (with the update installed to make it run like .NET 4):

        DataServiceQuery<Account> q = (_gsc.Users
            .Where(c => c.UserId == myId)
            .SelectMany(c => c.ConsumerXref)
            .Select(x => x.Account)
            .Where(a => a.AccountName == "My Account" && a.IsActive)
            .Select(a => a)) as DataServiceQuery<Account>;

    When I run it, I get an exception: "Cannot specify query options (orderby, where, take, skip) on single resource". As far as I can tell, I need to use a version of SelectMany that includes an additional lambda expression (http://msdn.microsoft.com/en-us/library/bb549040.aspx), but I am not able to get this to work correctly. Could someone show me how to properly structure the SelectMany call? Thank you for any help.

    Read the article

  • How to access localhost websites through HTTP requests from the BlackBerry simulator?

    - by SIA
    Hi everybody, I am developing a BlackBerry application and I want to access websites on my localhost (local machine). I am running the application on the BlackBerry 8350 simulator. From my code I can request any website on the internet and get the response. But when I give the URL as localhost:8080/portal/index.php, it displays an error message: "HTTP Error 404 - description: The requested resource (/portal/index.php) is not available." I am running my Apache web server on port 8080 on Windows. How can I access my local machine's website from the BlackBerry simulator? Please help and guide me. Thanks, SIA

    Read the article

  • The directory or file specified does not exist on the Web server.

    - by Guy
    I have a hybrid ASP.NET Web Forms / MVC application that I recently converted to .NET 4 with MVC 2. I have set up the application to run on IIS 7.5 (on Windows 7); the Web Forms part of the site is running okay, but the MVC part is not. Whenever I try to access a page that needs to go through the routing engine I get:

        HTTP Error 404.0 - Not Found
        The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
        Module:       IIS Web Core
        Notification: MapRequestHandler
        Handler:      StaticFile
        Error Code:   0x80070002

    I'm debugging this web site through VS2010 (so I've set it up to use IIS instead of Cassini), and when I put a breakpoint in the Application_Start function it is never hit, so the routes are never registered. When I put a breakpoint in the Page_Load function in one of the aspx code-behinds, it gets hit. So it seems that the problem is that the routes are not being registered. What am I missing?
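
    The "Handler: StaticFile" line suggests IIS is never handing the extensionless MVC URLs over to ASP.NET. A hedged sketch of the usual web.config workaround for IIS 7.x (it routes every request through the managed pipeline, which has some overhead):

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>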

    Read the article

  • Parser Error Message: Could not load type 'TestMvcApplication.MvcApplication'

    - by Riaan Engelbrecht
    I am getting the following error on one of our production servers; I'm not sure why it works on the DEV server but not here:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.

        Parser Error Message: Could not load type 'TestMvcApplication.MvcApplication'.

        Source Error:
        Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="TestMvcApplication.MvcApplication" Language="C#" %>

        Source File: /global.asax    Line: 1

    I'm not sure if anybody has come across this error before and how it was solved, but I have reached the end. Any help would be appreciated. I should also mention that this is the published code, so everything is compiled. Could there be something wrong with my compiler settings?

    Read the article
