Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.


  • Ideas on implementing threads and cross-process communication - C

    - by Jamie Keeling
    Hello all! I have an application consisting of two windows, where one communicates with the other and sends it a struct containing two integers (in this case, two dice rolls). I will be using events for the following circumstances:

    - Process A sends data to process B; process B displays the data
    - Process A closes, in turn closing process B
    - Process B closes, in turn closing process A

    I have noticed that if the second process is constantly waiting for the first process to send data, the program just sits waiting, which is where the idea of implementing threads in each process occurred to me. I have already implemented a thread in the first process, which currently creates the data to send to the second process and makes it available to it. The problem I'm having is that I don't have a lot of experience with threads and events, so I'm not sure of the best way to actually implement what I want to do.

    Following is a small snippet of what I have so far in the producer application. Rolling the dice and sending the data:

        case IDM_FILE_ROLLDICE:
        {
            hDiceRoll = CreateThread(
                NULL,                    // lpThreadAttributes (default)
                0,                       // dwStackSize (default)
                ThreadFunc(hMainWindow), // lpStartAddress
                NULL,                    // lpParameter
                0,                       // dwCreationFlags
                &hDiceID);               // lpThreadId (returned by function)
        }
        break;

    The data being sent to the other process:

        DWORD WINAPI ThreadFunc(LPVOID passedHandle)
        {
            HANDLE hMainHandle = *((HANDLE*)passedHandle);
            WCHAR buffer[256];
            LPCTSTR pBuf;
            LPVOID lpMsgBuf;
            LPVOID lpDisplayBuf;
            struct diceData storage;
            HANDLE hMapFile;
            DWORD dw;

            // Roll dice and store results in variable
            storage = RollDice();

            hMapFile = CreateFileMapping(
                (HANDLE)0xFFFFFFFF, // use paging file
                NULL,               // default security
                PAGE_READWRITE,     // read/write access
                0,                  // maximum object size (high-order DWORD)
                BUF_SIZE,           // maximum object size (low-order DWORD)
                szName);            // name of mapping object
            if (hMapFile == NULL)
            {
                dw = GetLastError();
                MessageBox(hMainHandle, L"Could not create file mapping object", L"Error", MB_OK);
                return 1;
            }

            pBuf = (LPTSTR) MapViewOfFile(hMapFile, // handle to map object
                FILE_MAP_ALL_ACCESS,                // read/write permission
                0,
                0,
                BUF_SIZE);
            if (pBuf == NULL)
            {
                MessageBox(hMainHandle, L"Could not map view of file", L"Error", MB_OK);
                CloseHandle(hMapFile);
                return 1;
            }

            CopyMemory((PVOID)pBuf, &storage, (_tcslen(szMsg) * sizeof(TCHAR)));
            //_getch();
            MessageBox(hMainHandle, L"Completed!", L"Success", MB_OK);

            UnmapViewOfFile(pBuf);
            return 0;
        }

    I'd like to think I am at least on the right lines, although for some reason, when the application finishes creating the thread and hits the return DefWindowProc(hMainWindow, message, wParam, lParam); it crashes, saying there's no more source code for the current location. I know there are certain ways to implement things, but as I've mentioned, I'm not sure if I'm doing this the right way. Has anybody else tried to do the same thing? Thanks!
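
    A likely culprit, as a hedged sketch rather than a confirmed diagnosis: CreateThread expects a pointer to the thread routine in lpStartAddress, but ThreadFunc(hMainWindow) calls the function immediately on the UI thread and passes its DWORD return value as the start address, which would explain the crash into nowhere. Since ThreadFunc dereferences its parameter as a HANDLE*, the handle belongs in lpParameter. A possible corrected call, using the names from the post:

        case IDM_FILE_ROLLDICE:
        {
            hDiceRoll = CreateThread(
                NULL,            // lpThreadAttributes (default)
                0,               // dwStackSize (default)
                ThreadFunc,      // lpStartAddress: the function pointer, not a call
                &hMainWindow,    // lpParameter: ThreadFunc reads this as a HANDLE*
                0,               // dwCreationFlags
                &hDiceID);       // lpThreadId
        }
        break;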

    Read the article

  • Primary reasons why programming language runtimes use stacks?

    - by manuel aldana
    Many programming language runtime environments use stacks as their primary storage structure (e.g. see how JVM bytecode maps to the runtime). Off the top of my head, I see the following advantages:

    - Simple structure (pop/push), trivial to implement
    - Most processors are optimized for stack operations anyway, so it is very fast
    - Fewer problems with memory fragmentation: allocation is just moving the memory pointer up and down, and freeing a complete block of memory is just resetting the pointer to the last entry's offset

    Is the list complete, or did I miss something? Are there programming language runtime environments that do not use stacks for storage at all?
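
    To illustrate why the structure is so cheap, here is a minimal sketch of a stack-based evaluator - a hypothetical three-instruction machine, not any particular VM:

        /* Toy stack machine: computes 2 + 3 and prints the result. */
        #include <stdio.h>

        enum op { PUSH, ADD, PRINT };
        struct instr { enum op op; int arg; };

        int main(void) {
            struct instr prog[] = { {PUSH, 2}, {PUSH, 3}, {ADD, 0}, {PRINT, 0} };
            int stack[16], sp = 0;   /* "allocation" is just moving sp up and down */
            int n = sizeof prog / sizeof prog[0];
            for (int i = 0; i < n; i++) {
                switch (prog[i].op) {
                case PUSH:  stack[sp++] = prog[i].arg; break;
                case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case PRINT: printf("%d\n", stack[--sp]); break;
                }
            }
            return 0;
        }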

    Read the article

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great; however, I want to increase the storage capacity of the cache beyond available memory. What I would really like is:

    - memcached semantics (i.e. not reliable, just a cache)
    - memcached API preferred (but not required)
    - large in-memory first-level cache (MRU)
    - huge on-disk second-level cache (main)
    - eviction from the on-disk cache at maximum storage, using LRU or LFU
    - proven implementation

    In searching for a solution I've found the following candidates, but they all miss my marks in some way. Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions? Already considered are:

    - memcachedb: best fit, but doesn't do evictions; explicitly "not a cache"; can't see any way to do evictions (either manual or automatic)
    - tugela cache: abandoned, no support; don't want to recommend it to customers
    - nmdb: doesn't use the memcache API; new and unproven; don't want to recommend it to customers
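
    For illustration only, a toy sketch of the semantics being asked for - memory first level with LRU eviction spilling to a disk second level. The paths and sizes are made up, values are assumed to be bytes, and it omits the on-disk eviction and the proven-implementation requirement:

        # Toy two-level cache sketch - not a proven implementation.
        import os, hashlib
        from collections import OrderedDict

        class TwoLevelCache:
            def __init__(self, mem_items=1000, disk_dir="/tmp/l2cache"):  # made-up defaults
                self.mem = OrderedDict()   # first level; oldest entries spill to disk
                self.mem_items = mem_items
                self.disk_dir = disk_dir
                os.makedirs(disk_dir, exist_ok=True)

            def _path(self, key):
                return os.path.join(self.disk_dir, hashlib.md5(key.encode()).hexdigest())

            def set(self, key, value):          # value: bytes (e.g. a rendered page)
                self.mem[key] = value
                self.mem.move_to_end(key)
                if len(self.mem) > self.mem_items:
                    old_key, old_val = self.mem.popitem(last=False)  # least recent
                    with open(self._path(old_key), "wb") as f:
                        f.write(old_val)

            def get(self, key):
                if key in self.mem:
                    self.mem.move_to_end(key)   # keep recently used items in memory
                    return self.mem[key]
                try:
                    with open(self._path(key), "rb") as f:
                        value = f.read()
                    self.set(key, value)        # promote back to the memory level
                    return value
                except FileNotFoundError:
                    return None                 # cache semantics: a miss is fine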

    Read the article

  • Best way to perform authentication on every request

    - by Nik
    Hello. In my ASP.NET MVC 2 app, I'm wondering about the best way to implement this: for every incoming request, I need to perform custom authorization before allowing the file to be served. (This is based on headers and the contents of the querystring. If you're familiar with how Amazon S3 does REST authentication - exactly that.) I'd like to do this in the most performant way possible, which probably means as light a touch as possible, with IIS doing as much of the actual work as possible. The service will need to handle GET requests, as well as writing new files coming in via POST/PUT requests. The requests are for arbitrary files, so it could be:

        GET http://storage.foo.com/bla/egg/foo18/something.bin
        POST http://storage.foo.com/else.txt

    Right now I've half implemented it using an IHttpHandler which handles all routes (with routes.RouteExistingFiles = true), but I'm not sure if that's the best approach, or if I should be hooking into the lifecycle somewhere else. Many thanks for any pointers. (IIS7)

    Read the article

  • How to make a multi-function Linux device work in Windows

    - by Naze Kimi
    I was able to build my Linux device as a composite gadget (serial + mass storage). When I plug this device into a Linux PC, the OS detects and can use both functions. But when I plug it into Windows, it is just detected as a "Multifunction Composite Gadget", and I can't use it as either a mass storage device or a serial device. How do I go about making this work in Windows? Is writing a customized driver really essential for this task? If so, how is this accomplished in the least "painful" way?

    Read the article

  • access models and forms within modules

    - by sims
    Hi Stackers, what is the best way to access my models and forms from a controller of a module? Let me explain with "pictures": /application/module/storage/controllers/IndexController.php needs to call readAction in the class called storage_Model_Files in /application/module/storage/models/Files.php. I've made this app's directory structure, and these forms and models, with zf.sh (Zend_Tool). I've read about all sorts of ways of manually including these files, but I want to lazy-load them, much like everything is done automatically in the default module. I can't seem to find how in the docs. Does that make sense? I have:

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    in my application.ini file, so I can access my controllers fine. Thanks for your help!

    Read the article

  • Large file download for a Rails project

    - by Horace Ho
    One client project will go online two months from now. One of the changed requirements is to support large-file downloads (10 to 15 MB per RAW camera file, with 1,000 to 5,000 file downloads expected per day) worldwide for their customers. The process will be:

    - there is an upload screen, via Paperclip, to the Rails local public folder
    - an hourly task uploads the files to web storage (S3?)
    - the download URL is updated from the Paperclip URL to the web URL

    Questions: is there a gem/plug-in for this purpose? If not, is there any gem/plug-in for S3 to recommend? Questions about the storage provider: is S3 recommended, or is there another service to recommend? The baseline is: the client's web server does not, and will not, have the bandwidth to handle the downloads. Thanks

    Read the article

  • MongoDB or CouchDB - fit for production?

    - by Alan
    I was wondering if anyone can tell me whether MongoDB or CouchDB is ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment); however, these projects are quite young, so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is:

    1) Who is using MongoDB or CouchDB today in a production environment?
    2) How are you using MongoDB/CouchDB?
    3) What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)?
    4) How did you deal with any migration issues?
    5) Do you have any good/bad experiences with either of these solutions that you'd like to share?

    Thanks.

    Read the article

  • django-avatar: can't save thumbnail

    - by Znack
    I'm using the django-avatar app and can't get it to save thumbnails. The original image saves normally to my media dir. Stepping through the code showed that the error occurs here:

        image.save(thumb, settings.AVATAR_THUMB_FORMAT, quality=quality)

    I found this line in create_thumbnail:

        def create_thumbnail(self, size, quality=None):
            # invalidate the cache of the thumbnail with the given size first
            invalidate_cache(self.user, size)
            try:
                orig = self.avatar.storage.open(self.avatar.name, 'rb')
                image = Image.open(orig)
                quality = quality or settings.AVATAR_THUMB_QUALITY
                w, h = image.size
                if w != size or h != size:
                    if w > h:
                        diff = int((w - h) / 2)
                        image = image.crop((diff, 0, w - diff, h))
                    else:
                        diff = int((h - w) / 2)
                        image = image.crop((0, diff, w, h - diff))
                    if image.mode != "RGB":
                        image = image.convert("RGB")
                    image = image.resize((size, size), settings.AVATAR_RESIZE_METHOD)
                    thumb = six.BytesIO()
                    image.save(thumb, settings.AVATAR_THUMB_FORMAT, quality=quality)
                    thumb_file = ContentFile(thumb.getvalue())
                else:
                    thumb_file = File(orig)
                thumb = self.avatar.storage.save(self.avatar_name(size), thumb_file)
            except IOError:
                return  # What should we do here? Render a "sorry, didn't work" img?

    Maybe all I need is just some library? Thanks
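
    A hedged guess, assuming AVATAR_THUMB_FORMAT is JPEG: an IOError on exactly that save() line is often a PIL/Pillow build without libjpeg. A quick standalone check:

        # If PIL was built without libjpeg, this raises "encoder jpeg not available" -
        # the usual cause of IOError on that save() line.
        from io import BytesIO
        from PIL import Image

        buf = BytesIO()
        Image.new("RGB", (16, 16)).save(buf, "JPEG")
        print("JPEG encoder is available")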

    Read the article

  • Is there any way to store the full-size image returned from the camera activity in internal memory?

    - by SimpleGuy
    I am using

        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(externalFileObj));

    to call the default camera activity. To get the full-size image you need to specify intent.putExtra(), but this always requires a URI, which works only for external storage files. I tried to create a temp.jpg image in internal memory and pass its URI:

        Uri.fromFile(new File(getFilesDir() + "/temp.jpg"));

    but the camera activity won't return after the image is captured. So is there no way to get the full-size image from the default camera application in our activity without using any external storage? Assuming that the device does not have an SD card, or the card currently in use is unavailable, is there no way I can avoid using it? Yes, I know we can create our own camera-preview surface, but I want to use the default camera application, as it is natural and has many more options. Thanks.
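
    One hedged workaround - it still touches external storage briefly, so it doesn't fully solve the no-SD-card case: let the camera write its full-size file externally as above, then copy it into internal storage from onActivityResult() and delete the original. A sketch for code living inside an Activity, with a hypothetical destination name:

        // Sketch: copy the camera's output into internal storage, then delete it.
        // Call from onActivityResult(); "photo.jpg" is a hypothetical name.
        private void moveToInternal(File externalFileObj) throws IOException {
            FileInputStream in = new FileInputStream(externalFileObj);
            FileOutputStream out = openFileOutput("photo.jpg", Context.MODE_PRIVATE);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            out.close();
            in.close();
            externalFileObj.delete(); // don't leave the copy on external storage
        }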

    Read the article

  • Initialization of objects in C++

    - by Happy Mittal
    I want to know: in C++, when does the initialization of objects take place? Is it at compile time or link time? For example:

        // file1.cpp
        extern int i;
        int j = 5;

        // file2.cpp (linked with file1.cpp)
        extern int j;
        int i = 10;

    Now, what does the compiler do? As I understand it, it allocates storage for the variables. What I want to know is: does it also put the initialization value in that storage, or is that done at link time?

    Read the article

  • Bit/Byte addressing - Little/Big-endian

    - by code8230
    Consider the 16-byte data packet below, which is sent through the network in network byte order, i.e. big-endian:

        Byte num: 0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
        Value:    34 67 89 45 90 AB FF 23 65 37 56 C6 56 B7 00 00

    Let's say 8945 (bytes 2-3) is a 16-bit value; all the others are 8-bit data bytes. On my system, which is little-endian, how will the data be received and stored? Let's say we are configured to receive 8 bytes at a time, RxBuff is the Rx buffer where data will be received, and Buff is the storage buffer where data will be stored. Please point out which case is correct for data storage after reading 8 bytes at a time:

        1) Buff[] = {0x34, 0x67, 0x45, 0x89, 0x90, 0xAB, ..., 0x00};
        2) Buff[] = {0x00, 0x00, ..., 0x67, 0x89, 0x45, 0x34};

    Will the whole 16 bytes of data be reversed, or only the 2-byte value contained in this packet?
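
    For what it's worth, receiving into a byte buffer never reorders anything: the bytes are stored in wire order, and endianness only matters when a multi-byte field is read back as an integer. A small sketch of that on a little-endian host:

        /* Bytes land in wire order; only the interpretation of multi-byte
           fields is endian-sensitive, hence ntohs() for the 16-bit value. */
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>
        #include <arpa/inet.h>

        int main(void) {
            unsigned char buff[16] = {0x34, 0x67, 0x89, 0x45, 0x90, 0xAB, 0xFF, 0x23,
                                      0x65, 0x37, 0x56, 0xC6, 0x56, 0xB7, 0x00, 0x00};
            uint16_t field;
            memcpy(&field, &buff[2], sizeof field);       /* bytes 2-3: 0x89 0x45 */
            printf("read as-is on LE host: 0x%04X\n", field);        /* 0x4589 */
            printf("after ntohs:           0x%04X\n", ntohs(field)); /* 0x8945 */
            return 0;
        }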

    Read the article

  • Trying to perform a series of actions on page unload, but the page unloads too fast to finish them

    - by user138821
    I have a series of actions I want to perform on page unload. Namely, if a user is editing an input field and they refresh, close the browser, or leave the page, I want to save the contents of the field. The actions don't include an AJAX call, so I can't just make it synchronous. It's actually saving to local storage, but the page unloads before the storage can take place. The code is correct: if I add an alert to the actions, the delay allows the rest of the code to finish before the alert even displays. Any ideas? Thanks!
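
    One thing worth checking: localStorage.setItem() is itself synchronous, so a save done directly inside a beforeunload handler normally completes before the page is torn down. A sketch, with hypothetical element and key names:

        // Sketch: save the field synchronously while the page is still alive.
        // "draft" and "draftBackup" are hypothetical names.
        const field = document.getElementById("draft") as HTMLInputElement;

        window.addEventListener("beforeunload", () => {
          // setItem is synchronous: it finishes before the unload proceeds.
          localStorage.setItem("draftBackup", field.value);
        });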

    Read the article

  • Database (partitions) related query - please suggest

    - by AGeek
    The query is as follows: we have an Oracle schema, with tables partitioned.

    -- We need to take one of the partitions offline. (How do we achieve this? We have thought of assigning each partition a different tablespace and then taking that tablespace offline. Is there any other way, so as to avoid creating so many tablespaces?)
    -- Then, copy the data files/logs related to that partition to temporary storage. (How do we achieve this? Are there any docs related to it?)
    -- Then we take this partition / related tablespace online again.
    -- Then we create a second schema with the same tables as above.
    -- Copy the partition data from temporary storage to this table. (How can this be achieved? Any specific doc?)
    -- Then verify that the data is accessible in all respects.

    If possible, let's discuss to find out the solution. Thanks.

    Read the article

  • How to Create a Virtual Windows Drive

    - by HyLian
    Hello, I'm trying to create a virtual Windows drive (like C:\) to map remote storage. The main purpose is to do it in a way that is transparent to the user, so the user wouldn't know that he is writing to/reading from another site. I was searching for available products, and I found that FUSE is not an option on Windows, and WebDAV maps the drive directly, whereas I would like to build a middle layer between Windows and the remote storage to implement some kind of services. Other alternatives exist, such as Dokan, which is very expensive, and the System.IO.IsolatedStorage namespace, which doesn't seem to explicitly create a new Windows drive. Probably Pismo (http://www.pismotechnic.com/) is the thing that most closely matches my requirements, but I would like to know if there is another alternative, including a native Windows (C++ or .NET) API, to do that. Thanks for reading :)
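
    If a plain drive-letter mapping onto an existing local folder (say, a locally synced mirror of the remote site) would be enough, the Win32 DefineDosDevice call can create one, SUBST-style - though it is only a letter mapping, not a file-system driver, so the middle layer would have to live in whatever syncs the folder. A sketch with a made-up path:

        /* Sketch: map Z: onto a local folder. The path is an assumption. */
        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            if (DefineDosDeviceA(0, "Z:", "C:\\remote-cache")) {
                printf("Z: now maps to C:\\remote-cache\n");
                /* Call again with DDD_REMOVE_DEFINITION to undo the mapping. */
            } else {
                printf("DefineDosDevice failed: %lu\n", GetLastError());
            }
            return 0;
        }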

    Read the article

  • Several C# Language Questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x;
        x->someThing = 10;

    3) What is String, with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, because Name is defined on Person.

        // Illegal without an explicit cast, even though the backing
        // store is a Customer: the storage location is of type
        // Person, which doesn't support the member/method being
        // accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10; // obvious

        // System.Object defines no member to store an integer value,
        // or any other value, in. So my question really is: when the
        // integer is boxed, what is the *type* it is actually boxed to?
        // In other words, what is the type that forms the backing
        // store on the heap for this operation?
        object x = i;

    Update: Thank you for your answers, Eric Gunnerson and Aaronought. I'm afraid I haven't been able to articulate my questions well enough to attract very satisfying answers. The trouble is, I do know the answers to my questions on the surface, and I am by no means a newbie programmer. But I have to admit, a deeper understanding of the intricacies of how a language and its underlying platform/runtime handle storage of types has eluded me for as long as I've been a programmer, even though I write correct code.

    Read the article

  • C: stdin and std* errors

    - by user355926
    I want to manipulate stdin, and then the other std* streams, but I get some errors:

        $ gcc testFd.c
        testFd.c:9: error: initializer element is not constant
        testFd.c:9: warning: data definition has no type or storage class
        testFd.c:10: error: redefinition of `fd'
        testFd.c:9: error: `fd' previously defined here
        testFd.c:10: error: `mode' undeclared here (not in a function)
        testFd.c:10: error: initializer element is not constant
        testFd.c:10: warning: data definition has no type or storage class
        testFd.c:12: error: syntax error before string constant

        $ cat testFd.c
        #include <stdio.h>
        #include <sys/ioctl.h>

        int STDIN_FILENO = 1; // I want to access typed
                              // Shell commands, dunno about the value:
        unsigned long F_DUPFD;

        fd = fcntl(STDIN_FILENO, F_DUPFD, 0);
        fd = open("/dev/fd/0", mode);
        printf("STDIN = %s", fd);
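
    For reference, a sketch of what the compiler is objecting to: the fcntl/open/printf calls are statements, so they must live inside a function, and STDIN_FILENO (which the headers define as 0, not 1) and F_DUPFD already come from the system headers:

        /* Sketch: duplicate stdin's descriptor using the standard definitions. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            int fd = fcntl(STDIN_FILENO, F_DUPFD, 0);   /* duplicate stdin */
            if (fd == -1) {
                perror("fcntl");
                return 1;
            }
            printf("stdin duplicated as fd %d\n", fd);  /* fd is an int: %d, not %s */
            return 0;
        }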

    Read the article

  • Is there a workaround for the 1,000-file limit in a directory on Windows Mobile 5?

    - by nateday76
    I need to download more than 1,000 files into a Windows Mobile 5 directory located on the storage card. If I copy the files onto the storage card via my desktop, there is no problem. But when I try to download the files from the handheld device, I get a disk-full error because of the 1,000-file limit, even though there is plenty of room. Has anyone run into this and found a workaround? I'm going to try zipping all of the files and then decompressing them on the device, but I'm not sure this will work.

    Read the article

  • Copying to /system - Android

    - by user1675783
    I have been trying to copy an APK from the assets of another APK to /system. Here is what I have done; it was working in my previous app but not in this one. I have added the permission for writing to external storage. It successfully copies to internal storage, but not to /system. Is there any way to copy directly to /system?

        copyStream("y.apk", "/sdcard/x.apk");
        Process mSuProcess;
        mSuProcess = Runtime.getRuntime().exec("su");
        new DataOutputStream(mSuProcess.getOutputStream()).writeBytes("mount -o remount rw /system");
        DataOutputStream mSuDataOutputStream = new DataOutputStream(mSuProcess.getOutputStream());
        mSuDataOutputStream.writeBytes("cp /sdcard/x.apk /system/app/x.apk");
        mSuDataOutputStream.writeBytes("exit\n");
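
    A hedged guess at the failure: the first two writeBytes() calls have no trailing newline, so the mount command and the cp run together on one line that the shell never executes as intended, and nothing is flushed before the stream is abandoned. A sketch of the same sequence with terminated commands (rooted device assumed; mount options vary by Android version):

        // Sketch: one root shell session, newline-terminated commands,
        // and a flush() before waiting for the shell to exit.
        import java.io.DataOutputStream;

        public class SystemCopy {
            public static void copyToSystem() throws Exception {
                Process su = Runtime.getRuntime().exec("su");
                DataOutputStream os = new DataOutputStream(su.getOutputStream());
                os.writeBytes("mount -o remount,rw /system\n");
                os.writeBytes("cp /sdcard/x.apk /system/app/x.apk\n");
                os.writeBytes("exit\n");
                os.flush();
                su.waitFor();
            }
        }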

    Read the article

  • ERROR: Attempted to read or write protected memory. This is often an indication that other memory is corrupt

    - by SPSamL
    I get this error after having edited a few pages in SharePoint 2010, and I have to do an IISReset on both front ends to get it to resolve. I don't know how to fix it, or even what else to supply here, but please let me know, as the resets now happen several times per day.

        Log Name:      Application
        Source:        ASP.NET 2.0.50727.0
        Date:          1/26/2011 11:12:48 AM
        Event ID:      1309
        Task Category: Web Event
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      PINTSPSFE02.samcstl.org

        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 1/26/2011 11:12:48 AM
        Event time (UTC): 1/26/2011 5:12:48 PM
        Event ID: c52fb336b7f147a3913fff3617a99d57
        Event sequence: 4965
        Event occurrence: 2178
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/1449762715/ROOT-2-129405348166941887
            Trust level: WSS_Minimal
            Application Virtual Path: /
            Application Path: C:\inetpub\wwwroot\wss\VirtualDirectories\80\
            Machine name: PINTSPSFE02

        Process information:
            Process ID: 5928
            Process name: w3wp.exe
            Account name: SAMC\MossAppPool

        Exception information:
            Exception type: AccessViolationException
            Exception message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

        Request information:
            Request URL: http://mosscluster/Pages/Home.aspx
            Request path: /Pages/Home.aspx
            User host address: 10.3.60.26
            User: SAMC\BARNMD
            Is authenticated: True
            Authentication Type: NTLM
            Thread account name: SAMC\MossAppPool

        Thread information:
            Thread ID: 110
            Thread account name: SAMC\MossAppPool
            Is impersonating: False
            Stack trace:
                at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Delete(String key, Boolean recursive, DeletionReason reason)
                at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Get(String key)
                at Microsoft.Office.Server.ObjectCache.SPCache.Get(String objectTypeName, String id)
                at Microsoft.Office.Server.Administration.UserProfileServiceProxy.GetPartitionPropertiesCache(Guid applicationID)
                at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_PartitionPropertiesCache()
                at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.DataCache.get_PartitionProperties()
                at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, Guid partitionID)
                at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, SPServiceContext serviceContext)
                at Microsoft.Office.Server.WebControls.MyLinksRibbon.EnsureMySiteUrls()
                at Microsoft.Office.Server.WebControls.MyLinksRibbon.get_PortalMySiteUrlAvailable()
                at Microsoft.Office.Server.WebControls.MyLinksRibbon.OnLoad(EventArgs e)
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

        Custom event details (Event Xml):
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="ASP.NET 2.0.50727.0" />
            <EventID Qualifiers="32768">1309</EventID>
            <Level>3</Level>
            <Task>3</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2011-01-26T17:12:48.000000000Z" />
            <EventRecordID>35834</EventRecordID>
            <Channel>Application</Channel>
            <Computer>PINTSPSFE02.samcstl.org</Computer>
            <Security />
          </System>
          <EventData>
            <Data>3005</Data>
            <Data>An unhandled exception has occurred.</Data>
            <Data>1/26/2011 11:12:48 AM</Data>
            <Data>1/26/2011 5:12:48 PM</Data>
            <Data>c52fb336b7f147a3913fff3617a99d57</Data>
            <Data>4965</Data>
            <Data>2178</Data>
            <Data>0</Data>
            <Data>/LM/W3SVC/1449762715/ROOT-2-129405348166941887</Data>
            <Data>WSS_Minimal</Data>
            <Data>/</Data>
            <Data>C:\inetpub\wwwroot\wss\VirtualDirectories\80\</Data>
            <Data>PINTSPSFE02</Data>
            <Data> </Data>
            <Data>5928</Data>
            <Data>w3wp.exe</Data>
            <Data>SAMC\MossAppPool</Data>
            <Data>AccessViolationException</Data>
            <Data></Data>
            <Data>http://mosscluster/Pages/Home.aspx</Data>
            <Data>/Pages/Home.aspx</Data>
            <Data>10.3.60.26</Data>
            <Data>SAMC\BARNMD</Data>
            <Data>True</Data>
            <Data>NTLM</Data>
            <Data>SAMC\MossAppPool</Data>
            <Data>110</Data>
            <Data>SAMC\MossAppPool</Data>
            <Data>False</Data>
            <Data>[same stack trace as shown under Thread information above]</Data>
          </EventData>
        </Event>

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared-content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of those people are working at a much larger scale.

    I've only been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for applicationHost.config and the other files, or if it's best to use the shared configuration feature with the config on a UNC share.

    What do you think? Thanks for your help, Richard

    Read the article

  • email bouncing back

    - by moiz.in
    Some emails are bouncing back with the error message below:

        The following organization rejected your message: cluster-m.mailcontrol.com

    When I looked at the further details, it gives me this information:

        Diagnostic information for administrators:
        Generating server: myserver.com.au
        [email protected]
        cluster-m.mailcontrol.com #554 5.7.1 Access denied ##

        Received: from myserver.com.au ([192.168.0.3]) by myserver.com.au
          ([192.168.0.3]) with mapi; Mon, 27 Jun 2011 08:04:50 +0800
        From: XYZ <[email protected]>
        To: "XYZ ([email protected])" <[email protected]>
        Date: Mon, 27 Jun 2011 08:04:49 +0800
        Subject: FW: Pic S979888
        Thread-Topic: Pic S979888
        Thread-Index: Acw0WppDIX2PPJwZR0OGVP1rbUtzDAAAzcuA
        Message-ID: <[email protected]>
        Accept-Language: en-US, en-AU
        Content-Language: en-US
        X-MS-Has-Attach: yes
        X-MS-TNEF-Correlator:
        acceptlanguage: en-US, en-AU
        Content-Type: multipart/mixed; boundary="_004_573874A6BF36864EA3FB179BF7A43C2B031D388DF7D8bunsrvapp00_"
        MIME-Version: 1.0

    Could you please tell me what is wrong with this and why it is bouncing back?

    Read the article

  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread: I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade along with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time.

    I tried using the tcp-request options combined with removing the static healthcheck file, as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up to that point. (The "hot reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want.)

    Is what I'm trying to do possible?

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of:

    1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches
    2) geographically distributed backups to protect against local disasters
    3) testing backups to ensure that they can be restored as needed

    Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, the offsite weekly tape backup would be used to restore the files.

    I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss was not noticed for a few weeks (such as a bug/security breach/virus), the data could be restored from an older backup.

    By doing this, the only scenario that poses a risk of complete data loss would be one where the local backups, the remote backups, and the servers all experienced a data loss in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small, and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups, because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but in my scenario I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy, or do not offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?"

    NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question, and were trying to answer a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize if this was lengthy, as it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.

    Read the article
