Search Results

Search found 14013 results on 561 pages for 'remote backup'.

  • Global.asax parser errors when deploying an MVC 1 application to a remote server.

    - by mannish
    So we're having some issues deploying an ASP.NET MVC app to a client site. When we try to test the app from localhost, we get the dreaded Global.asax parser error indicating it could not load the application global. Research indicates there are basically four possible reasons for this exception:

    1. The solution hasn't been built. This clearly isn't the case: the app runs fine on any machine we deploy to here, and we had to build and publish the darn thing to deploy it anyway.
    2. The Global.asax namespace inheritance does not match the application global code file. Again, we double-checked this, and since the app runs just fine here, that can't be the issue.
    3. Miscellaneous non-descript IIS/VS.NET mischief: something gets wonky in IIS or VS.NET and the web server won't behave correctly for this application. We've done cleans and rebuilds, we've deleted the virtual dir and recreated it, and we've performed all of the IIS munging we've found elsewhere online, in various combinations of IIS bounces, server reboots, virtual dir/application recreation, etc.
    4. A code-level permissions issue. We've verified full trust in machine/web config in the framework directory, we've set .NET trust to full in IIS, we've granted Everyone full control on the directories just to hit it with the security hammer, etc.

    The pertinent details: Windows Server 2008 x64; IIS 7 with a 32-bit-compatible app pool (the app was written on a 32-bit OS and compiled for Any CPU); app pool identity set to NetworkService; Microsoft ASP.NET MVC 1.0; XCopy deployment. We deployed another read-only app just fine. The significant difference in this app is the use of NHibernate and log4net, which require full trust. Additionally, the actual project name of the web project differs from the default namespace; however, the Inherits namespace in Global.asax and the Global.asax.cs file match, so this shouldn't be an issue. Anybody have any bright ideas? We're officially down to just the dim ones.
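
    For anyone checking the same thing, here is a sketch of the match that item 2 refers to (MyClient.Web is a placeholder namespace, not our real one): the Inherits value in the Global.asax directive must be the full name of the class in Global.asax.cs.

        <%-- Global.asax --%>
        <%@ Application Codebehind="Global.asax.cs" Inherits="MyClient.Web.MvcApplication" Language="C#" %>

        // Global.asax.cs
        namespace MyClient.Web
        {
            public class MvcApplication : System.Web.HttpApplication
            {
                // Application_Start, route registration, etc.
            }
        }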

  • How can I know the IP address of a remote host by using a UDP broadcast message?

    - by khera-satinder
    Hi all, I am developing an embedded system and am very new to TCP/IP. My problem is this: my board is installed in a local network and acquires its IP address dynamically, and it has to communicate with a client application running on one of the PCs in the network (other than the DHCP server). To communicate with the new board, the client application needs to know the board's IP address. What is the way to find out the IP address of the board? Will a UDP broadcast work for this purpose? If yes, please explain in detail, as I am unable to understand it. Please provide some sample code in C if possible. Regards, Satinder
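
    To make the idea concrete, here is a minimal sketch of the PC-side discovery in C. It assumes the board listens on a UDP port (30303 below is a placeholder) and replies to any datagram it receives; the board's IP is then read from the source address of the reply, regardless of what the payload says.

        /* discover.c - broadcast a probe and report who answers */
        #include <stdio.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            int yes = 1;
            setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

            struct sockaddr_in dst = {0};
            dst.sin_family = AF_INET;
            dst.sin_port = htons(30303);                   /* placeholder port */
            dst.sin_addr.s_addr = htonl(INADDR_BROADCAST); /* 255.255.255.255 */

            const char probe[] = "DISCOVER";               /* placeholder payload */
            sendto(s, probe, sizeof(probe), 0,
                   (struct sockaddr *)&dst, sizeof(dst));

            /* whatever the reply contains, its source address is the board's IP */
            char buf[128];
            struct sockaddr_in src;
            socklen_t slen = sizeof(src);
            if (recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &slen) >= 0)
                printf("board answered from %s\n", inet_ntoa(src.sin_addr));

            close(s);
            return 0;
        }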

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide - we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive instead, hoping that no other databases needed to grow their log file. Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, says: "Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database." Were there any other options I'm not aware of?
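
    For the record, the sequence I ran looks roughly like this (names and paths are placeholders; as the quote above says, TRUNCATE_ONLY breaks the log chain and is gone in SQL Server 2008):

        -- full backup before touching the log
        BACKUP DATABASE MyDb TO DISK = N'E:\backups\MyDb_before.bak' WITH INIT;

        -- discard the log without backing it up (SQL Server 2005 and earlier only)
        BACKUP LOG MyDb WITH TRUNCATE_ONLY;

        -- shrink the log file back down (target size in MB)
        DBCC SHRINKFILE (MyDb_log, 1024);

        -- full backup again so there is a valid recovery point after the break
        BACKUP DATABASE MyDb TO DISK = N'E:\backups\MyDb_after.bak' WITH INIT;

        -- cap the log so this can't happen silently again
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, MAXSIZE = 20480MB);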

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit, with a backup-over-the-internet software client that depends on the Volume Shadow Copy service. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service has been failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list - stuff I really don't care about - but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to Automatic. When I look in the System channel of the Event Log, I can see at boot time:

        The Volume Shadow Copy service entered the running state.

    ...and then two or three minutes later:

        The Volume Shadow Copy service entered the stopped state.

    How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and, per http://support.microsoft.com/kb/940184, I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions. I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer - same OS, same failure, same starting point after the SP1 install - but there the problem went away when I forced the Volume Shadow Copy service to Automatic startup rather than Manual, without having to resort to the Microsoft KB instructions.)
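
    (For completeness, two stock commands that show the state of VSS and its writers, and pull the most recent VSS entries out of the Application log - assuming default log locations:)

        vssadmin list writers
        wevtutil qe Application /q:"*[System[Provider[@Name='VSS']]]" /c:10 /f:text /rd:true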

  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why was this allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
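
    A minimal guard along the lines I'm after would presumably look like this, using the path configured above:

        import os
        import sys

        BACKUP_ROOT = "/external-backup"

        # os.path.ismount() is True only if something is actually mounted here
        if not os.path.ismount(BACKUP_ROOT):
            sys.exit(BACKUP_ROOT + " is not mounted; refusing to write to the internal disk.")

        # ... proceed with the backup copy ...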

  • How can I send a live video stream to a remote server from my phone?

    - by poc
    Hello, I have a problem streaming video to a server in real time from my phone - that is, letting my phone act as an IP camera, so that a server can watch live video from it. I have googled many, many solutions, but none of them solved my problem. I use MediaRecorder to record; it can save a video file to the SD card correctly. Then I referred to this page and used the following approach:

        skt = new Socket(InetAddress.getByName(hostname), port);
        pfd = ParcelFileDescriptor.fromSocket(skt);
        mediaRecorder.setOutputFile(pfd.getFileDescriptor());

    Now it seems I can send the video stream while recording. However, I wrote a receiver-side program to receive the video stream from Android, and it doesn't work. Is there an error somewhere? I can receive a file, but I cannot open the video file. I guess the problem may be caused by the file format? Here is an outline of my code. On the Android side:

        Socket skt = new Socket(hostIP, port);
        ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(skt);
        ...
        mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mediaRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mediaRecorder.setOutputFile(pfd.getFileDescriptor());
        ...
        mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
        mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
        ...
        mediaRecorder.start();

    On the receiver side (my Acer notebook; fin is the InputStream of the accepted socket):

        // anyway, I don't think the file extension will have any effect
        File video = new File(strDate + ".3gpp");
        FileOutputStream fos;
        try {
            fos = new FileOutputStream(video);
            byte[] data = new byte[1024];
            int count = -1;
            while ((count = fin.read(data, 0, 1024)) != -1) {
                fos.write(data, 0, count);
                fos.flush();
            }
            fos.close();
            fin.close();
        } ...

    I've been confused about this for a long time... thanks in advance.

  • Local Data Cache - How do I refresh the local db when I add fields to the remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change to my main database by adding a new field, then double-clicked the .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts, and I click OK in the hope that it will re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

  • iPhone: cached profile images from the web - how to find out that a remote image has changed?

    - by Stefan Klumpp
    I'm loading profile pictures from Facebook, caching them on disk, and loading them into cells of a UITableView. Now I'm wondering how I can find out when someone has changed his/her profile picture on Facebook, so that I load the new image from the web instead of using the one cached on disk. The URL of the image is always the same. Is there a lightweight way of doing this without downloading the image and comparing it to the local file?

  • Push TFS 2008 code to remote VSS over VPN?

    - by drovani
    We have a local Team Foundation Server 2008 where we keep our code under version control. However, we also have a paranoid client with their own Visual SourceSafe installation who wants us to keep a running copy of the code on their server as well. As such, I'm hoping there is a way to do a nightly push from our TFS repository to their VSS repository. I'm not concerned about mirroring each TFS changeset as a separate VSS changeset - just a once-nightly push that creates a new VSS changeset containing the latest changeset from TFS. I guess the first part of the question is whether it is even possible for TFS to push an update to VSS. I've noticed that most replies to this question have been something to the tune of "don't do it", but I can't find anything that specifically states it cannot be done. The second part would be automating the process by having the TFS server connect to the client's VPN and then push the code changes. I have full control over the TFS server, and I can customize the VSS install if there are settings that need changing, but I'm limited in what I can do about firewall settings or server-specific settings on the client's VSS server.
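
    The rough shape of what I have in mind for the nightly job, if it's possible at all (paths, project names, and the ss.exe flags are guesses to be verified; the VPN is assumed to be up, and working folders configured on both sides):

        rem pull the latest code from TFS into the local workspace
        tf get C:\push\Project /recursive /noprompt

        rem point ss.exe at the client's SourceSafe database over the VPN
        set SSDIR=\\clientserver\vss

        rem check the tree out, overwrite it with the fresh copy, check it back in
        rem (C:\vss\Project is assumed to be the working folder for $/Project;
        rem  files added since the last push would still need 'ss Add')
        ss Checkout $/Project -R
        xcopy C:\push\Project C:\vss\Project /E /Y
        ss Checkin $/Project -R "-CNightly push from TFS"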

  • Seriousness of a SMART disk error. How long will the disk last?

    - by Workshop Alex
    I have a 1 TB data disk, and the BIOS and Windows are reporting a SMART error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about six months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk onto which I could make a full backup. The most important data on this disk is safe, but there's a lot of work data which could be regenerated, though that would cost a lot of time. So I ordered a 1 TB USB disk, which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week or could it die any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the failing disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced the hard disk. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk and now have an additional external disk, which could always be practical. :-) The SMART error did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have together on one disk.)

  • Programmatically Download Image to Desktop from Remote App with Ruby?

    - by viatropos
    I was thinking about making a little crop/resize batch processor online, and wanted to know if it is possible to do the following: upload an image and specify dimensions; click "process" and have the app resize the image; have the image download automatically to wherever it was uploaded from (say, my desktop), but with a new name (based on the time, for example). This would let me host a free image processor that never stores any data other than tempfiles. Is that possible? I'm after something like Rails' send_file method, but I'm using Sinatra and am looking for something in pure Ruby.
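
    To frame the question, this is roughly the shape I have in mind in Sinatra: send_file with an attachment disposition makes the browser download the result instead of displaying it (resize_image is a stand-in for whatever does the actual work):

        require 'sinatra'
        require 'tempfile'

        post '/process' do
          upload = params[:image][:tempfile]            # the uploaded image
          out = Tempfile.new(['resized', '.png'])
          resize_image(upload.path, out.path,           # hypothetical resize helper
                       params[:width], params[:height])
          # :disposition => 'attachment' triggers the browser's save dialog
          send_file out.path,
                    :disposition => 'attachment',
                    :filename    => "processed-#{Time.now.to_i}.png"
        end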

  • Time Machine link error

    - by robinjam
    When attempting to perform a Time Machine backup, the backup appears to proceed as normal until it finishes copying files, at which point it complains that an "error occurred while linking files" for one of my external hard disks. During the previous backup that particular disk was in fact empty, so I can't understand why Time Machine is attempting to link back to it. But alas. I've verified all my disks using Disk Utility and they all appear to be fine. Does anybody know what causes this error, and how I might go about fixing it? Failing that, is there a way to force Time Machine to create a brand-new backup rather than an incremental one? Thanks in advance!

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
         Type of backup set:         Time:                       Num volumes:
         Full                        Tue Jun 21 11:27:26 2011    1
         Incremental                 Tue Jun 21 11:27:59 2011    1

    ...if I run the following command, it works (1308655646 is Tue Jun 21 11:27:26 2011 converted to epoch seconds):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
          file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set:

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
          ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second form only because I don't want to have to do the conversion by hand.
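
    (One possible dodge, letting GNU date do the epoch conversion inline rather than doing it manually:)

        duplicity --no-encryption \
          --restore-time "$(date -d '2011-06-21 11:27:26' +%s)" \
          --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt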

  • Get Windows Last Reboot Timestamp?

    - by David.Chu.ca
    I have a PC connected over the network which occasionally crashes or is restarted by remote users. After a restart, certain services and applications have to be brought back into a running state, so I would like to find out about the reboot as soon as possible. I think PowerShell may be a good choice here: with a small script I could make a remote call to get the last reboot timestamp. Is there any way to get a remote Windows XP machine's last reboot timestamp using PowerShell 2.0?
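
    A sketch of what I have in mind, using WMI (assuming the remote box allows DCOM/WMI and the caller has admin rights on it; REMOTEPC is a placeholder):

        # query the remote machine's OS instance and convert the WMI timestamp
        $os = Get-WmiObject Win32_OperatingSystem -ComputerName REMOTEPC
        $os.ConvertToDateTime($os.LastBootUpTime)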

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. My initial thought was "just run a PHP file that does a cURL request every second from cron" - but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications:

    - There are currently 10 sites I need to gather data from and log to a database. This number will invariably increase over time, so the solution needs to be scalable.
    - Each site spits out data to a URL every second but only keeps 10 lines on the page, and it can sometimes emit up to 10 new lines at a time, so I need to poll every second to ensure I get all the data.
    - As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time.

    Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am fetching every second is very small - under 500 bytes.
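
    One shape I keep coming back to is a single long-running CLI process instead of cron (cron's finest granularity is one minute anyway), polling all sites in parallel once per second with curl_multi; save_lines() below is a stand-in for the dedupe-and-insert logic:

        <?php
        $urls = array('http://site1.example/feed', 'http://site2.example/feed');

        while (true) {
            $start = microtime(true);

            // fire all requests in parallel with curl_multi
            $mh = curl_multi_init();
            $handles = array();
            foreach ($urls as $u) {
                $ch = curl_init($u);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
                curl_setopt($ch, CURLOPT_TIMEOUT, 1);
                curl_multi_add_handle($mh, $ch);
                $handles[$u] = $ch;
            }
            do {
                curl_multi_exec($mh, $running);
                curl_multi_select($mh, 0.1);
            } while ($running > 0);

            foreach ($handles as $u => $ch) {
                save_lines($u, curl_multi_getcontent($ch)); // hypothetical DB insert
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
            curl_multi_close($mh);

            // sleep out whatever is left of this second
            $left = 1.0 - (microtime(true) - $start);
            if ($left > 0) usleep((int) ($left * 1000000));
        }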

  • Queue remote calls to a Python Twisted Perspective Broker?

    - by agartland
    The strength of Twisted (for Python) is its asynchronous framework (I think). I've written an image-processing server that takes requests via Perspective Broker. It works great as long as I feed it less than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time, and because it tries to process them all concurrently, the server crashes. As a solution, I'd like to queue up the remote_calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks!
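
    For what it's worth, Twisted's DeferredSemaphore looks like it does exactly this kind of throttling; a sketch (process_image stands in for the real handler):

        from twisted.internet import defer
        from twisted.spread import pb

        sem = defer.DeferredSemaphore(100)   # at most 100 jobs in flight

        def process_image(image):
            # stand-in for the real (possibly Deferred-returning) processing
            return len(image)

        class ImageServer(pb.Root):
            def remote_process(self, image):
                # run() waits for a free slot, calls the function, then releases
                # the slot; calls beyond the limit queue up automatically
                return sem.run(process_image, image)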

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi. I have about 200 sites, each of which has two servers running MS SQL Server (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for... a lot... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy the backups to the secondary server, restore the backup and log with NORECOVERY, create the endpoints, and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage SQL 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love a script that ensures there are matching endpoints (same ports) on both servers; backs up the database and log; copies the backups to the second server; restores the database and log with NORECOVERY; sets the partners on both servers; and somehow confirms that the databases are linked and synchronized. Well, thanks for reading :)
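
    For concreteness, the bare-bones version of the per-database steps, with no error handling (which is the part I'm really worried about); server names, the share, and the endpoint port are placeholders, and the endpoints are assumed to already exist on both sides:

        set DB=MyDb
        set PRIN=PRODSRV
        set MIRR=BACKUPSRV

        sqlcmd -S %PRIN% -Q "BACKUP DATABASE [%DB%] TO DISK = N'\\share\%DB%.bak' WITH INIT"
        sqlcmd -S %PRIN% -Q "BACKUP LOG [%DB%] TO DISK = N'\\share\%DB%.trn' WITH INIT"
        sqlcmd -S %MIRR% -Q "RESTORE DATABASE [%DB%] FROM DISK = N'\\share\%DB%.bak' WITH NORECOVERY, REPLACE"
        sqlcmd -S %MIRR% -Q "RESTORE LOG [%DB%] FROM DISK = N'\\share\%DB%.trn' WITH NORECOVERY"
        rem set the partner on the mirror first, then on the principal
        sqlcmd -S %MIRR% -Q "ALTER DATABASE [%DB%] SET PARTNER = 'TCP://%PRIN%.example.local:5022'"
        sqlcmd -S %PRIN% -Q "ALTER DATABASE [%DB%] SET PARTNER = 'TCP://%MIRR%.example.local:5022'"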

  • How does Maven find an Artifact in a remote repository?

    - by Thomas
    I'm trying to create a Maven plugin that generates a file containing the URL of every dependency in a project. I have been able to get the dependencies and their Artifact objects, but I'm having trouble getting the download URL. Using ArtifactResolver and ArtifactMetadataSource I get some of the artifact information, but I fail to get all of it for all of the dependencies. I haven't been able to find documentation on the resolution logic, so that I could call it from my plugin. I can use an ArtifactResolver to download an artifact, but what I really want is just the URL. The Maven Artifact API has a method called getDownloadUrl (see http://maven.apache.org/ref/2.0.4/maven-artifact/apidocs/org/apache/maven/artifact/Artifact.html), but I can't seem to find a way to get a real value into it - I always get null. Is there a way to have an artifact resolved (downloading it or not) and get the URL of where the file came from?
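
    The nearest workaround I've sketched is to rebuild the URL from each remote repository's layout after resolving (Maven 2.0.x API; remoteRepositories and artifact are the values already injected into the mojo, and the probe is left as a comment):

        // candidate URL for the artifact in each remote repository,
        // according to that repository's layout
        for (Object o : remoteRepositories) {
            ArtifactRepository repo = (ArtifactRepository) o;
            String url = repo.getUrl() + "/" + repo.pathOf(artifact);
            // probe url (e.g. with an HTTP HEAD request) to find out
            // which repository actually hosts the file
        }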

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore it from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system G:\ and my backup media C:\. But since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely, the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like:

        robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...
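
    If I'm reading the documentation right, /XD takes a space-separated list rather than a comma-separated one (and /CREATE copies the tree with zero-length files only, while /E does a full copy including empty subdirectories), so the command would presumably look more like:

        robocopy G:\ C:\backup /E /XD "G:\Windows" "G:\Program Files"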

  • Forcing the browser to pop up a Save As dialog box from a link pointing to a remote URL

    - by user360788
    Hi, I am building a web app that lets the user download files hosted on a CDN by clicking a link. The link should point at the CDN URL directly, in order to minimize the load on our servers. We would like the browser to pop up the Save As dialog box when the user clicks the link, and not display the content of the file at all, so the page should not reload. However, we don't have access to the HTTP headers sent back by the CDN. Is it possible to pop up the Save As dialog box for the download using client-side code alone?
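
    The one purely client-side mechanism I know of is the HTML5 download attribute, sketched below - but browsers honor it only for same-origin (or CORS-permitted) URLs, so it may well not work against a third-party CDN:

        <!-- asks the browser for a save-as instead of navigating;
             cdn.example.com and the filename are placeholders -->
        <a href="http://cdn.example.com/files/video.mp4" download="video.mp4">
            Download the file
        </a>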
