Search Results

Search found 90459 results on 3619 pages for 'server cache'.


  • Controlling ASP.NET output cache memory usage

    - by Josh Einstein
    I would like to use output caching with WCF Data Services. Although there's nothing specifically built in to support caching, there is an OnStartProcessingRequest method that allows me to hook in and set the cacheability of the request using normal ASP.NET mechanisms. But I am worried about the worker process getting recycled due to excessive memory consumption if large responses are cached. Is there a way to specify an upper limit for the ASP.NET output cache so that if this limit is exceeded, items in the cache will be discarded? I've seen the caching configuration settings, but I get the impression from the documentation that they apply to explicit caching via the Cache object, since there is a separate outputCacheSettings element which has no memory-related attributes. Here's a code snippet from Scott Hanselman's post that shows how I'm setting the cacheability of the request:

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            base.OnStartProcessingRequest(args);
            // Cache for a minute based on querystring
            HttpContext context = HttpContext.Current;
            HttpCachePolicy c = HttpContext.Current.Response.Cache;
            c.SetCacheability(HttpCacheability.ServerAndPrivate);
            c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }
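    One thing worth checking is what limits are already in force: the limits configured via the privateBytesLimit and percentagePhysicalMemoryUsedLimit attributes of the <cache> element in web.config apply to ASP.NET's cache memory as a whole, which the output cache draws from, and they can be read back at runtime. A minimal sketch, assuming .NET 3.5 or later:

        // Minimal sketch (assumes .NET 3.5+): read the memory limits
        // ASP.NET enforces on its cache, which the output cache shares.
        using System.Web;

        public static class CacheLimitInfo
        {
            public static string Describe()
            {
                return string.Format(
                    "Private bytes limit: {0} bytes, physical memory limit: {1}%",
                    HttpRuntime.Cache.EffectivePrivateBytesLimit,
                    HttpRuntime.Cache.EffectivePercentagePhysicalMemoryLimit);
            }
        }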

    Read the article

  • How to cache queries in Rails across multiple requests

    - by m.u.sheikh
    I want to cache query results so that the same results are fetched for more than one request, until I invalidate the cache. For instance, I want to render a sidebar that lists all the pages of a book, much like a book's index. Since I want to show it on every page of the book, I have to load it on every request. I can cache the rendered sidebar using action caching, but I also want to cache the query results that are used to generate the HTML for the sidebar. Does Rails provide a way to do this? How can I do it?

    Read the article

  • How do I get ELMAH to work with SQL Server (permission problems)

    - by Gary McGill
    I've got ELMAH working on my (Cassini) development server and was quite happy with it, but now that I'm trying to move everything to my production server (IIS7), the honeymoon looks like it's over. I've got past the "gotcha" with IIS7, which frankly could have been better highlighted in the documentation, and if I just use the in-memory log then it works. However, I'm trying to get it to use the SQL Server log (as I do on my development system), and I'm getting an error along the lines of: The EXECUTE permission was denied on the object 'ELMAH_GetErrorsXml'. Well, fine. I know how to grant database permissions, but I'm really struggling to understand which user and which stored procs/tables I need to grant access to. The thing that's really confusing me is that I didn't have to do anything like this to get it to work on my development server. The only difference I can see is that on my development server it seems to connect as NT AUTHORITY\IUSR, whereas on my production server it seems to connect as NT AUTHORITY\NETWORK SERVICE. (It's just using a trusted connection, so I've not explicitly configured it to do that; I presume it's down to the web server.)

    UPDATE: I've since established that because I'm using Cassini, it was actually logging in as me (an admin) and not IUSR, which explains why I didn't get any permission problems.

    On my development server, the IUSR account is a member of the public database role and has access to the required database (again as "public"). There's no explicit granting of object-level permissions. [See update above; this is irrelevant.] On my production server, I've added NETWORK SERVICE in exactly the same way (public database role, explicit access to the database as "public"). Yet I get this permission error. Why?! [See update above; the only reason I don't get a permission error locally is that I'm running as an admin.] And, of course, if the fact that it works locally is just luck, I will need to know which SPs/tables to grant access to. My guess would be all three SPs and not the table, but it would be good (again) to see some documentation that makes this explicit.

    Read the article

  • Working with a Java Mail Server for Testing

    - by Charlie
    I'm in the process of testing an application that takes mail out of a mailbox, performs some action based on the content of that mail, and then sends a response mail depending on the result of the action. I'm looking for a way to write tests for this application. Ideally, I'd like these tests to bring up their own mail server, push my test emails to a folder on that mail server, and have my application scrape the mail out of the mail server that my test started. Configuring the application to use the mail server is not difficult, but I do not know where to look for a programmatic way of starting a mail server in Java. I've looked at JAMES, but I am unable to figure out how to start the server from within my test. So the question is this: what can I use for a mail server in Java that I can configure and start entirely from Java?

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete: /api/location/Londo, where the query would be something like SELECT * FROM Locations WHERE Name LIKE 'Londo%'. These locations change very infrequently, so I would like to cache them to prevent needless trips to the database and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB that could run as a cache for something like this to improve performance? Should I just cache the entire table/collection under a single key, with a data structure that supports the search, and then search upon retrieval of the data?
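    The last idea can be sketched as follows, assuming the Locations table is small enough to hold in memory. This uses the in-process MemoryCache from System.Runtime.Caching (.NET 4) purely for illustration rather than the AppFabric API, and the loader delegate is hypothetical:

        // Illustrative sketch: cache the whole (small, rarely-changing)
        // list of location names under one key and do the prefix search
        // in memory instead of in SQL.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.Caching;

        public class LocationSearch
        {
            private readonly Func<List<string>> _loadAllFromDb; // hypothetical DB loader

            public LocationSearch(Func<List<string>> loadAllFromDb)
            {
                _loadAllFromDb = loadAllFromDb;
            }

            public IEnumerable<string> Search(string prefix)
            {
                var all = (List<string>)MemoryCache.Default.Get("locations:all");
                if (all == null)
                {
                    all = _loadAllFromDb(); // one trip to the database
                    MemoryCache.Default.Set("locations:all", all,
                        DateTimeOffset.Now.AddHours(6)); // expire a few times a day
                }
                return all.Where(n => n.StartsWith(prefix, StringComparison.OrdinalIgnoreCase));
            }
        }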

    Read the article

  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention, and sorry for my bad English). I don't think this is a programming error; I think it is an error in some configuration of the server or something else, but I don't know what. I have a PHP script that runs as a Linux process (it does not run in the web browser). It sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts around 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it ran on a shared server (HostGator is our hosting provider), and in the beginning it worked fine, with no trouble. But five months later an error appeared: the process just dies for no reason. The script had sent and inserted only 700 rows into the table of the database when it died, the process didn't show any warning or error, nothing appears in the error logs, and I hadn't made any change to the script. HostGator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error: the shared server is on Red Hat 4.1.2-46 and the dedicated server is on CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains. On the shared server the script originally died after inserting about 700 rows, and now it dies after inserting about 2,500 rows; that's better, but we didn't change anything. On the dedicated server, the script still dies after inserting about 40 rows. Before it dies, the script changes to a zombie process, and we don't know why. Its memory usage appears to be 0.3%, and its CPU usage appears to be 0.7% to 1%. I have changed the PHP memory limit to 128 MB, and even to -1 (so PHP won't have any limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I'm using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update on the servers.

    What could the problem be? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks!!! Bye!!! :)

    Read the article

  • Server cost/requirements for a web site with thousands of concurrent users?

    - by Angelus
    I'm working on a big project, and I do not have much experience with servers and how much they cost. The project is a new table game for online play and betting: basically, a poker server that must stay responsive with thousands of concurrent users. What type of server should I look for? What features, hardware or software, are required? Should I consider cloud computing? Thank you in advance.

    Read the article

  • Squid: caching *.swf with variables

    - by stfn
    I recently upgraded my Ubuntu 11.10 x64 server to 12.04. In the process, Squid was updated from 2.7 to 3.1. Squid 3.1 has many different options, which broke my setup, so I completely removed Squid 2.7 and 3.1 and started from scratch. Everything is now working as before except for one thing: caching of .swf files with ?variables. Squid 3 treats a ? as dynamic content and does not cache it. For example, Squid 2.7 was caching the .swf file at http://ninjakiwi.com/Games/Tower-Defense/Play/Bloons-Tower-Defense-5.html and 3.1 is not:

        <object id="mov" name="movn" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="800" height="620">
          <param name="movie" value="http://www.ninjakiwifiles.com/Games/gameswfs/btd5.swf?v=160512-2">
          <param name="allowscriptaccess" value="always">
          <param name="bgcolor" value="#000000">
          <param name="flashvars" value="file=http://www.ninjakiwifiles.com/Games/gameswfs/btd5-dat.swf?v=280512">
          <p>Get Flash play Ninja Kiwi games.</p>
        </object>

    It is because of the "?v=160512-2" and "?v=280512" parts. This line should be responsible for that:

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

    But disabling it still doesn't make Squid cache the .swf files. How do I configure Squid 3.1 to cache those files? My current config is:

        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        acl localnet src 192.168.2.0-192.168.2.255
        acl localnet src 192.168.3.0-192.168.3.255
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow localnet
        http_access deny all
        http_port 3128
        cache_dir ufs /var/spool/squid 10240 16 256
        maximum_object_size 100 MB
        coredump_dir /var/spool/squid3
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
        refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
        refresh_pattern Packages\.bz2$ 0 20% 4320 refresh-ims
        refresh_pattern Sources\.bz2$ 0 20% 4320 refresh-ims
        refresh_pattern Release\.gpg$ 0 20% 4320 refresh-ims
        refresh_pattern Release$ 0 20% 4320 refresh-ims
        refresh_pattern . 0 40% 40320
        cache_effective_user proxy
        cache_effective_group proxy

    Read the article

  • What's the fastest way to check the availability of a SQL Server instance?

    - by mwolfe02
    I have an MS Access program in use in multiple locations. It connects to MS SQL Server tables, but the server name is different in each location. I am looking for the fastest way to test for the existence of a server. The code I am currently using looks like this:

        ShellWait "sc \\" & ServerName & " qdescription MSSQLSERVER > " & Qt(fn)
        FNum = FreeFile()
        Open fn For Input As #FNum
        Line Input #FNum, Result
        Close #FNum
        Kill fn
        If InStr(Result, "SUCCESS") Then ...

    ShellWait executes a shell command and waits for it to finish; Qt wraps a string in double quotes; fn is a temporary filename variable. I run the above code against a list of server names (of which only one is normally available). The code takes about one second if the server is available and about 8 seconds for each server that is unavailable. I'd like to get both of these lower if possible, but especially the failure case, as it happens most often.
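    A generally faster alternative to shelling out to "sc" is to probe the SQL Server TCP port directly with a short timeout, so an unavailable server fails in about a second instead of eight. A sketch of the idea in C# (port 1433 assumes a default instance listening on TCP; the same technique can be ported to the VBA environment the question uses):

        // Sketch: probe the SQL Server port with a short timeout instead
        // of querying the service control manager.
        using System;
        using System.Net.Sockets;

        public static class SqlServerProbe
        {
            public static bool IsReachable(string host, int port = 1433, int timeoutMs = 1000)
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        var connect = client.ConnectAsync(host, port);
                        // Wait() throws if the attempt faults; treat timeouts
                        // and refusals alike as "unavailable".
                        return connect.Wait(timeoutMs) && client.Connected;
                    }
                }
                catch (Exception)
                {
                    return false;
                }
            }
        }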

    Read the article

  • HTML5 Web Storage Cleared when Browser Clears Cache?

    - by jiewmeng
    I wonder: will HTML5 Web Storage be cleared when the browser clears its cache? If so, many people like me may lose data by accidentally clearing the cache. Or is it like in this comment: "Since HTML5 local storage is kept separate from js cookies (like Silverlight, Gears, Flash), it opens up a world of 3rd party privacy issues for HTML5 as these objects will likely NOT get deleted with a clear cache or delete temporary data", where web storage is not cleared, but that leads to privacy issues?

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers; i.e., it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g., the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thanks for looking over my noob question! Oliver
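    For illustration only (this is not a Squid directive): the "blocking cache" behavior described above amounts to coalescing concurrent misses for the same key behind a single origin fetch, which looks like this in application code:

        // Illustration of a "blocking cache": concurrent misses for the
        // same key block behind the first fetch and share its result.
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public class BlockingCache<TKey, TValue>
        {
            private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _entries =
                new ConcurrentDictionary<TKey, Lazy<TValue>>();

            public TValue GetOrFetch(TKey key, Func<TKey, TValue> fetchFromOrigin)
            {
                // ExecutionAndPublication guarantees exactly one caller runs
                // the fetch; the rest block on .Value until it completes.
                Lazy<TValue> lazy = _entries.GetOrAdd(key,
                    k => new Lazy<TValue>(() => fetchFromOrigin(k),
                        LazyThreadSafetyMode.ExecutionAndPublication));
                return lazy.Value;
            }
        }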

    Read the article

  • Castle Active Record - Working with the cache

    - by David
    Hi all, I'm new to the Castle ActiveRecord pattern and I'm trying to get my head around how to use the cache effectively. What I'm trying to do (or want to do) is: when calling GetAll, find out if I have called it before and check the cache, else load from the database; but I also want to pass a bool parameter that will force the cache to clear and requery the DB. So I'm just looking for the final bits. Thanks.

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            List<Model.Resource> resources = new List<Model.Resource>();
            // Request to force reload
            if (forceReload)
            {
                // need to specify to force a reload (how?)
                XmlConfigurationSource source = new XmlConfigurationSource("appconfig.xml");
                ActiveRecordStarter.Initialize(source, typeof(Model.Resource));
                resources = Model.Resource.FindAll().ToList();
            }
            else
            {
                // Check the cache somehow and return the cache?
            }
            return resources;
        }

        public static List<Model.Resource> GetAll()
        {
            return GetAll(false);
        }
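    A minimal sketch of the pattern the question is reaching for, assuming a hand-rolled static in-memory copy is acceptable (this is not ActiveRecord's second-level cache, and ActiveRecordStarter.Initialize belongs in application startup rather than in the reload path):

        // Sketch: memoize the query result in a static field and let
        // forceReload invalidate it. A simple application-level cache,
        // not NHibernate's; drops into the same class as the code above.
        private static List<Model.Resource> _cached;
        private static readonly object _sync = new object();

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            lock (_sync)
            {
                if (forceReload || _cached == null)
                {
                    _cached = Model.Resource.FindAll().ToList(); // hits the database
                }
                return _cached;
            }
        }

        public static List<Model.Resource> GetAll()
        {
            return GetAll(false);
        }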

    Read the article

  • SQL Server Max SmallInt Value

    - by Derek Dieter
    The maximum value for a smallint in SQL Server is 32767; the full range is -32768 through 32767, and the size is 2 bytes.

    Other integer type ranges:

        BigInt: -9223372036854775808 through 9223372036854775807 (8 bytes)
        Int: -2147483648 through 2147483647 (4 bytes)
        TinyInt: 0 through 255 (1 byte)
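    These ranges line up exactly with the corresponding .NET integer types (smallint = short, int = int, bigint = long, tinyint = byte), so the MinValue/MaxValue constants are a quick way to check them from C#:

        using System;

        class IntegerRanges
        {
            static void Main()
            {
                Console.WriteLine(short.MinValue); // -32768 (smallint minimum)
                Console.WriteLine(short.MaxValue); // 32767 (smallint maximum)
                Console.WriteLine(int.MaxValue);   // 2147483647 (int maximum)
                Console.WriteLine(long.MaxValue);  // 9223372036854775807 (bigint maximum)
                Console.WriteLine(byte.MaxValue);  // 255 (tinyint maximum)
            }
        }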

    Read the article

  • C# program to switch updating from Master server to Slave server

    - by tanthiamhuat
    Assuming that I have set up MySQL database replication using a master-slave configuration and have synchronized the master and slave servers, how can my C# program know that it has to update the slave server when the master server fails? Under what conditions should the C# program switch from the master server to the slave server? I am using MySQL Server.
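    In practice, the usual signal is a connection failure: the program tries the master first and falls back to the slave when opening a connection (or running a query) throws. A hedged sketch using MySQL Connector/NET; the host names and connection strings are hypothetical, and note that writing to a slave will make it diverge from the master unless the slave is promoted first:

        // Hypothetical failover helper: try the master, fall back to the
        // slave if the master is unreachable. Connection strings are
        // placeholders, not a recommended topology.
        using System;
        using MySql.Data.MySqlClient; // MySQL Connector/NET

        public static class FailoverConnector
        {
            public static MySqlConnection Open()
            {
                string[] candidates =
                {
                    "Server=master-host;Database=mydb;Uid=app;Pwd=secret;Connection Timeout=3",
                    "Server=slave-host;Database=mydb;Uid=app;Pwd=secret;Connection Timeout=3"
                };
                foreach (string cs in candidates)
                {
                    try
                    {
                        var conn = new MySqlConnection(cs);
                        conn.Open(); // throws MySqlException if the server is down
                        return conn;
                    }
                    catch (MySqlException)
                    {
                        // This server is unreachable: try the next in the list.
                    }
                }
                throw new InvalidOperationException("Neither master nor slave is reachable.");
            }
        }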

    Read the article

  • SQL Server 2005 remote connection problem: cannot solve it, please help

    - by user287745
    Note: if this question does not fit this site, please do not just close it, but also redirect it to the fitting sister site. Thank you. The steps taken and the error are described below; please help, I am stuck!

    I installed SQL Server 2005 Express on both computers, installed SQL Server Management Studio Express on both computers, ran each Management Studio and connected to the SQL Server instance using Windows authentication (on one computer the connection is, for example, "A-63A9D4D7E7834\SQLEXPRESS"), created a database named "test1", created a few tables with data, then saved and exited. I did everything described in "How to configure SQL Server 2005 to allow remote connections" (http://support.microsoft.com/kb/914277/en-us), and I have also simply disabled the firewalls completely.

    Connecting to A-63A9D4D7E7834: I started SQL Server Management Studio Express on computer A-63A9D4D7E7834 with server name "ALL-E425BE6C41D\SQLEXPRESS" and authentication "Windows Authentication", clicked Connect, and I get the following error:

        Cannot connect to ALL-E425BE6C41D\SQLEXPRESS.
        Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476

    Read the article

  • How to implement SHA-2 in SQL Server 2005 or 2008 with a CLR assembly

    SQL Server 2012 supports SHA-256 and SHA-512 through the HASHBYTES() function, but earlier versions of SQL Server do not. SHA-256, SHA-384 and SHA-512 can, however, be implemented in SQL Server 2005 or SQL Server 2008 with the CLR assembly described in this article.
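    The core of such an assembly is small. A minimal sketch of a SQLCLR scalar function exposing SHA-256 (the article's actual assembly may differ; the class and function names here are illustrative):

        // SQLCLR sketch: expose SHA-256 to SQL Server 2005/2008. Build as
        // an assembly, register it with CREATE ASSEMBLY, then bind it with
        // CREATE FUNCTION ... AS EXTERNAL NAME.
        using System.Data.SqlTypes;
        using System.Security.Cryptography;
        using Microsoft.SqlServer.Server;

        public static class HashFunctions
        {
            [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
            public static SqlBytes Sha256(SqlBytes data)
            {
                if (data == null || data.IsNull) return SqlBytes.Null;
                using (SHA256 sha = SHA256.Create())
                {
                    return new SqlBytes(sha.ComputeHash(data.Value));
                }
            }
        }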

    Read the article

  • SQL Server: bcp utility: login fails

    - by Patrick
    Microsoft Windows [Version 5.2.3790]
    (C) Copyright 1985-2003 Microsoft Corp.

        C:\Documents and Settings\Administrator>bcp "SELECT TOP 1000 * FROM SOData.dbo.Experts" queryout c:\customer3.txt -n -t -UAdministrator -P -SDNAWINDEV
        SQLState = 28000, NativeError = 18456
        Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'Administrator'.

    Or, without the -P flag and typing the password, it is the same:

        C:\Documents and Settings\Administrator>bcp "SELECT TOP 1000 * FROM SOData.dbo.Experts" queryout c:\customer3.txt -n -t -UAdministrator -SDNAWINDEV
        Password:
        SQLState = 28000, NativeError = 18456
        Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'Administrator'.

    Read the article

  • Linux arp cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel ARP cache timeout, but I can't find a detailed explanation of how they work anywhere. Even the kernel.org documentation doesn't give a good explanation; I can only find recommended values to alleviate overflow. Here is an example of the values I have:

        net.ipv4.neigh.default.gc_thresh1 = 128
        net.ipv4.neigh.default.gc_thresh2 = 512
        net.ipv4.neigh.default.gc_thresh3 = 1024

    Now, from what I've gathered so far: gc_thresh1 is the number of ARP entries allowed before the garbage collector starts removing any entries at all. gc_thresh2 is the soft limit: the number of entries allowed before the garbage collector actively removes ARP entries. gc_thresh3 is the hard limit, above which entries are aggressively removed. Now, if I understand correctly, if the number of ARP entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically at an interval set by gc_interval. My question is: if the number of entries goes beyond gc_thresh2 but stays below gc_thresh3, or if it goes beyond gc_thresh3, how are the entries removed? In other words, what do "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what gc_interval defines, but I can't find by how much.

    Read the article

  • SQL Server 2008 - cannot register default instance MSSQLSERVER

    - by Paul Moss
    Hello, I have installed SQL Server 2008 Developer on Windows 7 64-bit. In Management Studio I cannot register the default instance MSSQLSERVER; it cannot be found, although the service is running. I get the message:

        Cannot connect to PHOENIX\MSSQLSERVER. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid) (Microsoft SQL Server, Error: 87)

    However, Management Studio does show the SQL Server 2005 Express instance (installed with VS 2008 Pro), which appeared as already registered. I am using Windows Authentication, as I installed in mixed mode. Any ideas would be appreciated. Many thanks, Paul

    Read the article

  • Checking server load with PHP and taking appropriate action

    - by teehoo
    Hi, I'm creating a project in which a server receives operations from clients to apply to a local server document. The server and client both share the same document, and therefore each message the client sends contains an MD5 hash, which the server compares against its own freshly generated hash to ensure the server and client documents are synchronized. My question is: if the server is overloaded, could I somehow detect this in PHP, which would in turn let me decide whether I want to execute the hash-generation function or not? Perhaps this scenario is not a perfect use case, but I'm interested in the approach in general.

    Read the article

  • Why is it better to use unreadable bytes for client-server communication?

    - by Alessa
    I'm designing the communication protocol for a client-server system, and here is what I'm thinking about. I could use plain messages like:

        "authme username password" (maybe encrypted)
        "accept"
        "get archive of H2O from 03.02.2005 to 20.12.2064", followed by a binary structure transfer, or an "error description"

    So why do I always need to do something like:

        0x0FA52FD + CRC
        0x0D34423 + CRC
        ...

    I can see some security reasons, but I don't think that's the real reason. So why can't I use strings in client-server communication?
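    For illustration of what the binary style buys (fixed framing, cheap parsing, corruption detection), here is a sketch of a length-prefixed binary frame with a toy checksum; the opcode values and layout are invented for the example, and a real protocol would use a proper CRC:

        // Illustrative only: a length-prefixed binary frame with a toy
        // additive checksum. Opcode values and layout are invented.
        using System;
        using System.IO;
        using System.Text;

        public static class Framing
        {
            public static byte[] BuildFrame(byte opcode, byte[] payload)
            {
                using (var ms = new MemoryStream())
                using (var w = new BinaryWriter(ms))
                {
                    w.Write(opcode);                 // 1-byte command instead of "authme ..."
                    w.Write((ushort)payload.Length); // fixed-size length prefix
                    w.Write(payload);
                    w.Flush();
                    byte checksum = 0;               // toy checksum; use a real CRC in practice
                    foreach (byte b in ms.ToArray())
                        unchecked { checksum += b; }
                    w.Write(checksum);
                    return ms.ToArray();
                }
            }
        }

        // Example: Framing.BuildFrame(0x01, Encoding.UTF8.GetBytes("username"));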

    Read the article
