Search Results

Search found 51790 results on 2072 pages for 'long running'.

  • Question about C++ template functions taking any type, as long as that type meets at least one of the requirements

    - by smerlin
    Since I can't explain this very well, I will start with a small example right away:

        template <class T> void Print(const T& t) { t.print1(); }
        template <class T> void Print(const T& t) { t.print2(); }

    This does not compile:

        error C2995: 'void Print(const T &)' : function template has already been defined

    So, how can I create a template function which takes any type T, as long as that type has a print1 member function OR a print2 member function (no polymorphism)?
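
    One way to express "use print1 if it exists, otherwise print2" is a classic sizeof-based member detector plus enable_if, sketched below. The helper names (has_print1, enable_if) are illustrative, and the probe only matches a const, argument-free print1, so adjust the member-pointer signature to taste:

        #include <iostream>

        template <bool B, class T = void> struct enable_if {};
        template <class T> struct enable_if<true, T> { typedef T type; };

        // Detects a callable "void print1() const" member at compile time.
        template <class T>
        struct has_print1 {
            typedef char yes[1];
            typedef char no[2];
            template <class U, void (U::*)() const> struct probe {};
            template <class U> static yes& test(probe<U, &U::print1>*);
            template <class U> static no&  test(...);
            static const bool value = sizeof(test<T>(0)) == sizeof(yes);
        };

        // Exactly one overload survives substitution for any given T.
        template <class T>
        typename enable_if<has_print1<T>::value>::type
        Print(const T& t) { t.print1(); }

        template <class T>
        typename enable_if<!has_print1<T>::value>::type
        Print(const T& t) { t.print2(); }

        struct A { void print1() const { std::cout << "A::print1\n"; } };
        struct B { void print2() const { std::cout << "B::print2\n"; } };

        int main() {
            Print(A());  // picks the print1 overload
            Print(B());  // falls back to print2
        }

    A type with neither member still fails to compile, which is usually what you want.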

    Read the article

  • Building SL4 + RIAServices app takes too long on VS2010.

    - by adlanelm
    Got a Win7 box with VS2010 Premium installed on it. Building desktop apps works just fine, but we have a solution with 15 SL4 and 21 desktop projects, and building the SL part of it takes too long. This is very irritating and encourages dropping TDD, since every time I run a test it takes ~3 seconds for msbuild to find out that nothing changed and the project should be skipped. The projects are very small, there's nothing fancy in them, and we didn't have any problems before we switched from VS2008+SL3. I've heard people complaining about VS2010 speed in general, but nothing about SL4 build time. Is anyone experiencing the same problem, and is there any workaround for this?

    Read the article

  • .Net MVC - RESTful URLs - The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters

    - by Truegilly
    Hello, I'm creating an MVC application that follows a RESTful URL approach, and I am experiencing the following error: "The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters." This error occurs when my URL length reaches 225 characters. Surely I can have much longer URLs without this problem, and doesn't this limit relate to file paths rather than URLs? I'm sure some of you MVC guys have experienced this ;) Is there a way round it? Where am I going wrong? Thank you for your time, Truegilly
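
    ASP.NET validates incoming URLs as file-system paths before routing gets a look in, which is why a roughly 260-character URL trips a file-name limit. On ASP.NET 4 there are httpRuntime attributes commonly pointed at for this; a hedged web.config sketch (verify the attribute names against your framework version, as they do not exist on 3.5):

        <system.web>
          <!-- maxUrlLength defaults to 260; relaxedUrlToFileSystemMapping stops
               ASP.NET validating the URL as an NTFS path before routing runs. -->
          <httpRuntime maxUrlLength="1024"
                       maxRequestPathLength="1024"
                       relaxedUrlToFileSystemMapping="true" />
        </system.web>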

    Read the article

  • Can using non-primitive Integer/Long datatypes too frequently in an application hurt performance?

    - by Marcos
    I am using Long/Integer data types very frequently in my application to build generic datatypes. I fear that using these wrapper objects instead of primitive data types may be harmful for performance, since each use needs to create an object, which is an expensive operation. But it also seems that I have no other choice (when I have to use primitives with generics) than to use them. Still, it would be great if you could suggest whether there is anything I can do to make it better, or any way I could avoid it. Also, what are the downsides of this? Suggestions welcomed!
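
    For a feel for the cost, here is a minimal benchmark sketch contrasting a boxed accumulator with a primitive one; the loop bound is illustrative only, and a loop like this also defeats the Long.valueOf cache, which only covers -128 to 127:

        public class BoxingCost {
            public static void main(String[] args) {
                final long N = 10000000L;

                long t0 = System.nanoTime();
                Long boxed = 0L;                 // each += unboxes, adds, re-boxes
                for (long i = 0; i < N; i++) boxed += i;
                long t1 = System.nanoTime();

                long prim = 0L;                  // stays a primitive throughout
                for (long i = 0; i < N; i++) prim += i;
                long t2 = System.nanoTime();

                System.out.printf("boxed: %d ms, primitive: %d ms%n",
                                  (t1 - t0) / 1000000, (t2 - t1) / 1000000);
            }
        }

    Inside generic containers the boxing is unavoidable, but hot loops can usually unbox once, work on the primitive, and box once on the way out; primitive-specialized collections (GNU Trove and friends) are the usual escape hatch when the containers themselves are hot.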

    Read the article

  • How to measure how long a function runs?

    - by rhose87
    I want to see how long a function runs, so I added a timer object to my form and came up with this code:

        private int counter = 0;

        // inside the button click I have:
        timer = new Timer();
        timer.Tick += new EventHandler(timer_Tick);
        timer.Start();
        Result result = new Result();
        result = new GeneticAlgorithms().TabuSearch(parametersTabu, functia);
        timer.Stop();

    and:

        private void timer_Tick(object sender, EventArgs e)
        {
            counter++;
            btnTabuSearch.Text = counter.ToString();
        }

    But this is not counting anything. Any ideas?
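
    A Windows Forms Timer raises Tick on the UI thread's message loop, and that loop is blocked for the whole time TabuSearch runs on the same thread, so the counter never advances. For timing a single call, System.Diagnostics.Stopwatch avoids the message loop entirely; a minimal sketch reusing the names from the question:

        using System.Diagnostics;

        // inside the button click:
        var sw = Stopwatch.StartNew();
        Result result = new GeneticAlgorithms().TabuSearch(parametersTabu, functia);
        sw.Stop();
        btnTabuSearch.Text = sw.Elapsed.TotalSeconds.ToString("F2") + " s";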

    Read the article

  • How to very efficiently assign a lat/long to the city boundary described by a shapefile?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id), where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension, and I am fine with a program that runs for 2 days to generate all the INSERT statements.
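
    The tile arithmetic itself is the standard Web Mercator ("slippy map") formula; a small sketch, in Python purely for illustration. Precomputation would rasterize each polygon over the tiles it covers and emit one (tilex, tiley, polygon_id) row per tile, after which each query is a single indexed probe:

        import math

        def latlon_to_tile(lat_deg, lon_deg, zoom=22):
            """Standard Web Mercator tile indices at the given zoom level."""
            lat = math.radians(lat_deg)
            n = 2 ** zoom
            x = int((lon_deg + 180.0) / 360.0 * n)
            y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
            return x, y

        # lookup sketch: 'table' maps (x, y) -> polygon_id, built offline
        # print(table[latlon_to_tile(48.8566, 2.3522)])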

    Read the article

  • What's the fastest way to search a very long list of words for a match in ActionScript 3?

    - by Nuthman
    So I have a list of words (the entire English dictionary). For a word-matching game, when a player moves a piece I need to check the entire dictionary to see if the word that the player made exists in it. I need to do this as quickly as possible; simply iterating through the dictionary is way too slow. What is the quickest algorithm in AS3 to search a long list like this for a match, and what datatype should I use (i.e. Array, Object, Dictionary, etc.)?
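
    Exact-match lookups don't need a search at all if the word list is loaded once into a hash: AS3's plain Object hashes string keys, giving near constant-time membership tests. A sketch, where wordList is assumed to be an Array holding the dictionary's words:

        // Build once at startup:
        var words:Object = {};
        for each (var w:String in wordList) {
            words[w] = true;
        }

        // Per move, a single hash probe instead of a scan:
        if (words[candidate]) {
            // valid word
        }

    flash.utils.Dictionary buys nothing here, since it exists for object (rather than string) keys; for prefix queries ("could any word start with these letters?") a trie is the usual next step.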

    Read the article

  • MySQL null/empty-string too long for VarChar(5)?

    - by rlb.usa
    I have a stored procedure that takes an input var_zip varchar(5). My web app calls this procedure and gives it the parameter:

        cmd.Parameters.Add(new MySqlParameter("var_Zip", MySqlDbType.VarChar, 5));
        cmd.Parameters["var_Zip"].Value = txtZip.Text;

    txtZip is empty, so it is passing either an empty string '' or null; it doesn't matter to me which. But it is throwing an error:

        MySql.Data.MySqlClient.MySqlException: Data too long for column 'var_Zip' at row 1

    What's going on, and how can I fix it? I don't understand.
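
    Worth ruling out the two usual suspects before blaming MySQL: stray whitespace in the textbox (six spaces really are too long for varchar(5)), and an empty string being sent where an explicit SQL NULL was intended. A hedged sketch covering both:

        cmd.Parameters.Add(new MySqlParameter("var_Zip", MySqlDbType.VarChar, 5));

        string zip = txtZip.Text == null ? null : txtZip.Text.Trim();
        cmd.Parameters["var_Zip"].Value =
            string.IsNullOrEmpty(zip) ? (object)DBNull.Value : zip;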

    Read the article

  • Correct way to textually report the remaining time on a long-running process?

    - by Ryan
    So you have a long-running process, perhaps with a progress bar, and you want a text estimate of the remaining time, e.g. "5 minutes remaining" or "30 seconds remaining". If you don't actually want to report clock time (due to accuracy, resolution, or update-rate issues) but want to stick to the text summary, what is the correct paradigm? Is "one minute left" displayed from 0 to 60 seconds, or from 1:00 to 1:59? Say there's 1:35 left: is that "2 minutes remaining" or "1 minute remaining"? Do you just pare it down to "a few minutes left" when you're under 3 minutes? What is the preferred (least user-frustrating) method?
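
    One workable convention (an assumption, not a standard): round up, so the label never promises more progress than it delivers, and blur the smallest buckets where precision reads as false confidence. A sketch, with thresholds that are pure judgment calls:

        def humanize_remaining(seconds):
            """Round up and blur small values; tune the cutoffs to taste."""
            if seconds <= 10:
                return "a few seconds remaining"
            if seconds < 60:
                return "less than a minute remaining"
            minutes = -(-int(seconds) // 60)      # ceiling division: 95 s -> 2 min
            if minutes <= 2:
                return "a couple of minutes remaining"
            return "about %d minutes remaining" % minutes

    Under this scheme 1:35 reports as two minutes, and the estimate only ever errs on the pleasant side.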

    Read the article

  • In C#, is there a thread scheduler for long-running threads?

    - by LogicMagic
    Hi, our scenario is a network scanner. It connects to a set of hosts and scans them in parallel for a while, using low-priority background threads. I want to be able to schedule lots of work but only have a given number of hosts, say ten, scanned in parallel. Even if I create my own threads, the many callbacks and other asynchronous goodness use the ThreadPool, and I end up running out of resources. (I should look at MonoTorrent...) If I use the ThreadPool, can I limit my application to some number of threads that will leave enough for the rest of the application to run smoothly? Is there a thread pool that I can initialize to n long-lived threads?
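
    Capping ThreadPool.SetMaxThreads is process-wide and tends to starve everything else that uses the pool, so a common alternative is a private pool: n long-lived, low-priority workers draining one queue, leaving the real ThreadPool alone. A minimal sketch (ScanHost is a hypothetical stand-in for the blocking scan):

        using System.Collections.Generic;
        using System.Threading;

        class ScanScheduler
        {
            private readonly Queue<string> queue = new Queue<string>();

            public ScanScheduler(int workers)
            {
                for (int i = 0; i < workers; i++)
                {
                    var t = new Thread(Worker) { IsBackground = true, Priority = ThreadPriority.Lowest };
                    t.Start();
                }
            }

            public void Enqueue(string host)
            {
                lock (queue) { queue.Enqueue(host); Monitor.Pulse(queue); }
            }

            private void Worker()
            {
                while (true)
                {
                    string host;
                    lock (queue)
                    {
                        while (queue.Count == 0) Monitor.Wait(queue);
                        host = queue.Dequeue();
                    }
                    ScanHost(host);   // your blocking, low-priority scan
                }
            }

            private void ScanHost(string host) { /* connect and scan */ }
        }

    On .NET 4 the same shape falls out of a custom TaskScheduler with a bounded concurrency level.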

    Read the article

  • Why such a long time span in creating the session factory?

    - by vijay.shad
    Hi, my project is a web application running in a Tomcat container. The application is Spring-based and uses Hibernate. The problem is that creating the session factory takes a lot of time. Here are the logs:

        2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {}
        2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true}
        2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order

    There you can see a delay of more than 3 minutes between these log entries. My database is MySQL, and the database server runs on the local machine. The container environment is CentOS Linux. I am clueless about why these steps take that much time, but when I do the same task under Eclipse it does not take that long. The development environment is Windows.

    Read the article

  • How to convert binary read/write to non-binary read/write in C++

    - by Phenom
    I have some C++ code from somewhere that reads and writes data in binary format. I want to see what it's reading and writing in the file, so I want to convert its binary read and write to non-binary (text) read and write. After converting the binary write to a text write, I still want to be able to read the information back correctly. How can this be done?

    The write function:

        int btwrite(short rrn, BTPAGE *page_ptr)
        {
            // long lseek(), addr;
            long addr;
            addr = (long) rrn * (long) PAGESIZE + HEADERSIZE;
            lseek(btfd, addr, 0);
            return (write(btfd, page_ptr, PAGESIZE));
        }

    The read function:

        int btread(short rrn, BTPAGE *page_ptr)
        {
            // long lseek(), addr;
            long addr;
            addr = (long) rrn * (long) PAGESIZE + HEADERSIZE;
            lseek(btfd, addr, 0);
            return (read(btfd, page_ptr, PAGESIZE));
        }
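
    Since BTPAGE's layout isn't shown, one format-agnostic option is to write each page as hex text: the file becomes inspectable in any editor and still round-trips byte for byte. A sketch, where btwrite_text/btread_text are hypothetical helpers on a stdio FILE* (BTPAGE and PAGESIZE come from the original program); note the "page N" headers make records variable-width, so rrn-based seeking would need fixed-width padding:

        #include <cstdio>

        int btwrite_text(FILE *f, short rrn, const BTPAGE *page_ptr)
        {
            const unsigned char *bytes = (const unsigned char *) page_ptr;
            fprintf(f, "page %d\n", (int) rrn);
            for (int i = 0; i < PAGESIZE; ++i)
                fprintf(f, "%02x%c", bytes[i], (i % 16 == 15) ? '\n' : ' ');
            fprintf(f, "\n");
            return 0;
        }

        int btread_text(FILE *f, BTPAGE *page_ptr)
        {
            unsigned char *bytes = (unsigned char *) page_ptr;
            int rrn;
            if (fscanf(f, " page %d", &rrn) != 1)   // leading space skips newlines
                return -1;
            for (int i = 0; i < PAGESIZE; ++i) {
                unsigned v;
                if (fscanf(f, " %2x", &v) != 1)
                    return -1;
                bytes[i] = (unsigned char) v;
            }
            return rrn;   /* which page was just read */
        }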

    Read the article

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain; there's another thread at Server Fault that recommends this. At the same time, I've read many times on different websites that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that, because I don't have a LAN with a static IP accessible from the internet; in fact I don't even have a Windows server in my LAN. What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it would be much easier to penetrate the others as well. But as far as I can see, that's the worst-case scenario, since I am only going to join my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

    Read the article

  • How to use Connectify to share files between 2 laptops, one running Windows 7?

    - by p2pnode
    I have 2 laptops, one running XP and the other Windows 7; both have a WiFi card installed. I want to be able to share files between these 2 machines, and in a next step also share an internet connection. I have heard that Connectify on Windows 7 is a very handy and easy tool to set up a hotspot, and that other machines, even XP or Vista, can connect to this hotspot and share files and the internet connection. I am able to see the network (set up on the host) from my client machine, but how do I share files? I don't see any such menu or anything. Also, after installing Connectify on Windows 7, I am no longer able to connect to the internet using my data card; it throws the error "Error 31: A device attached to the system is not functioning". And on the client machine, if I connect to the data card as well as to the wireless network set up by the host machine, I am no longer able to access the internet even though the data card is connected, and the browser throws an error. Please help. Is there any other utility similar to Connectify?

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an ASUS RT-N66U router. It works great, and I have loaded up a few pieces, like Transmission and Optware. I can run vi, but when I open a file with it, it fails with a "memory exhausted" error and the terminal session hangs. For reference: if I simply start vi with no file, it runs fine, but if I give it a file to open I get the memory-exhausted error, even if the file is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that; the swapon command now indicates I really do have swap:

        [root@MyRouter samba]$ swapon -s
        Filename    Type       Size      Used  Priority
        /dev/sda1   partition  32900860  0     1

    How can I get vi to work? Thanks!

    System setup reference information: ASUS RT-N66U router, 2TB USB hard drive, running Samba. Partitions on the hard drive:

        Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
        255 heads, 63 sectors/track, 30400 cylinders
        Units = cylinders of 16065 * 4096 = 65802240 bytes
        Disk identifier: 0xfacbc8ab
           Device Boot  Start  End    Blocks      Id  System
        /dev/sda1       1      512    32900868    82  Linux swap / Solaris
        /dev/sda2       513    29000  1830638880  83  Linux

    Memory:

        $ cat /proc/meminfo
        MemTotal:     255840 kB
        MemFree:      210980 kB
        Buffers:        5264 kB
        Cached:        22768 kB
        SwapCached:        0 kB
        Active:        20272 kB
        Inactive:      11448 kB
        HighTotal:    131072 kB
        HighFree:      99868 kB
        LowTotal:     124768 kB
        LowFree:      111112 kB
        SwapTotal:  32900860 kB
        SwapFree:   32900860 kB
        Dirty:             0 kB
        Writeback:         0 kB

    TIA!

    Read the article

  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Netbooting?

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux-based machine to let Macs Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g. 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for PXE boot and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but Netboot has a quirk: Apple systems that haven't been connected to the network yet can Netboot fine, but once a machine grabs a "real" IP address from the global DHCP server, it "saves" it and requests it the next time we want it to Netboot, and the local DHCP server won't give it out. This is what I want:

        Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
        Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when the client already has a "stored" IP:

        Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
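
    That trailing "ignored (not authoritative)" is the tell: ISC dhcpd will only DHCPNAK a client's request for a lease from a foreign range when it believes it is authoritative for the segment. A hedged dhcpd.conf sketch; be careful, because an authoritative server that answers faster than the global one can NAK ordinary clients too, which is why the pool is fenced to the Netboot/PXE vendor classes:

        authoritative;        # permits a DHCPNAK of the stale 10.0.x.x request

        class "apple-netboot" {
            match if substring(option vendor-class-identifier, 0, 9) = "AAPLBSDPC"
                  or substring(option vendor-class-identifier, 0, 9) = "PXEClient";
        }

        subnet 10.168.0.0 netmask 255.255.0.0 {
            pool {
                allow members of "apple-netboot";
                range 10.168.0.10 10.168.254.254;
            }
        }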

    Read the article

  • How to run VisualSvn Server on port 443 running IIS on same server?

    - by Metro Smurf
    Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server has about 10 sites. One of them uses HTTPS over port 443, with the following bindings:

        http   x.x.x.39:80   site.com
        http   x.x.x.39:80   www.site.com
        https  x.x.x.39:443

    VisualSVN Server properties: server name svn.SomeSite.com, server port 443, server binding x.x.x.40. No sites in IIS are listening on x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

        make_sock: could not bind to address x.x.x.40:443 (OS 10013)
        An attempt was made to access a socket in a way forbidden by its access permissions.
        no listening sockets available, shutting down

    When I stop site.com in IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start site.com, VisualSVN Server starts without a problem. My goal is to be able to access VisualSVN Server with a normal URL, i.e. one that doesn't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?
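
    By default http.sys listens for HTTPS on the wildcard address, so IIS can hold 0.0.0.0:443 even when its bindings only name x.x.x.39. A hedged sequence that restricts http.sys to .39 so the Apache-based VisualSVN Server can take .40:443 (run from an elevated prompt; x.x.x.39 stands in for your real IIS address):

        netsh http show iplisten
        netsh http add iplisten ipaddress=x.x.x.39
        net stop http /y
        net start w3svc

    It is also worth checking netsh http show sslcert for a certificate bound to 0.0.0.0:443 and rebinding it to x.x.x.39:443 if one is present.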

    Read the article

  • How to migrate a running KVM (with full disk copy) to another node?

    - by klipz
    I'm doing tests on KVM, and I'd like to see if I can do a hot migration, meaning the virtual machine won't stop running during the migration (though a few seconds of freeze is OK). I use a small cluster for my tests: kvm1, kvm2, and kvmnfs. kvm1 and kvm2 run the virtual machines; kvmnfs is an NFS server, mounted on /KVM on both kvm1 and kvm2. To migrate a VM (only RAM, in fact) from kvm1 to kvm2, I run the same kvm command on kvm2 (with -incoming tcp:0:4444) as on kvm1, then I use "migrate -d tcp:kvm2:4444": it works great, since the VM file is common to both machines. Now I want to make a full migration (RAM + disk) of a local VM file (no more NFS) from kvm1 to kvm2. I tried creating an empty file with touch on kvm2 and using the same kvm command line plus "-incoming ...". Then on kvm1 I used "migrate -d tcp:kvm2:4444": it copies everything, then... the VM fails (any disk I/O gives an I/O error)! And my VM file on kvm2, the one I created with touch, still has a size of 0 bytes. What am I doing wrong? What is the exact command to use on kvm2? And what is the command to launch, in monitor mode, on kvm1?
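
    Two things are commonly missing in this setup (assuming a QEMU/KVM build recent enough to support block migration): plain migrate moves RAM only, so the monitor needs the -b flag to copy disk blocks too, and the destination image must already exist with the same format and virtual size, since a zero-byte file from touch gives the incoming QEMU nothing to write into. A sketch, with illustrative path and size:

        # on kvm2: pre-create the destination image (same format/virtual size as the source)
        qemu-img create -f qcow2 /var/lib/kvm/vm1.qcow2 20G

        # on kvm2: start the identical kvm command line used on kvm1, plus
        #   -incoming tcp:0:4444

        # on kvm1, in the QEMU monitor (-d detach, -b full block migration):
        migrate -d -b tcp:kvm2:4444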

    Read the article

  • Would Apache running on port 8080 prevent dynamically loaded scripts in JavaScript?

    - by editor
    Had a nice PHP/HTML/JS prototype working on my personal Linode, then tried to throw it onto a work machine. The page adds a script tag dynamically with some JavaScript; it's a bunch of Google charts that update based on different timeslices. That code looks something like this:

        // jQuery $.post to send the beginning and end timestamps
        $.post("channel_functions.php", data_to_post, function(data){
            // the data that's returned is the javascript I want to load
            var head = document.getElementsByTagName('head')[0];
            var script = document.createElement('script');
            var text = document.createTextNode(data);
            script.type = 'text/javascript';
            script.id = 'chart_data';
            script.appendChild(text);
            // Adding script tag to page
            head.appendChild(script);
            // Call the functions I know are present in the script tag
            loadTheCharts();
        });

        function loadTheCharts() {
            // These are the functions that were loaded dynamically.
            // By this point the script tag is supposed to be loaded, added and eval'd
            function1();
            function2();
        }

    function1() and function2() don't exist until they get added to the DOM, but I don't call loadTheCharts() until after the $.post has run, so this doesn't seem to be a problem. I'm one of those dirty PHP coders your mother warned you about, so I'm not well versed in JavaScript beyond what I've read in the typical go-to O'Reilly books. But this code worked fine on my personal dev server, so I'm wondering why it wouldn't work on this new machine. The only difference in setup, from what I can tell, is that the new machine is running on port 8080, so it's 192.168.blah.blah:8080/index.php instead of nicedomain.com/index.php. I can see the code was indeed added to the DOM when I use webmaster tools to "view generated source", but in Firebug I get an error like "function2 is not defined", even though my understanding was that all script tags are eval'd when added to the head. My question: given what I've laid out, and that the machine is running on :8080, is there a reason anyone can think of as to why a dynamically loaded function like function2() would be defined on the Linode and not on the machine running Apache on 8080?
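
    The port itself shouldn't matter as long as the page and channel_functions.php share an origin; a likelier culprit is the shape of the response the work server returns (PHP notices or stray markup mixed into the JavaScript would make the injected script die before it ever defines function2). One way to make that failure visible and take DOM timing out of the equation is to evaluate the response directly; a sketch using jQuery's documented globalEval:

        $.post("channel_functions.php", data_to_post, function (data) {
            try {
                $.globalEval(data);   // run the returned source synchronously, in global scope
            } catch (e) {
                // A PHP notice or stray markup in the response breaks the script here,
                // which would also have killed the injected <script> tag silently.
                console.log("eval failed:", e, "response began with:", data.substring(0, 200));
                return;
            }
            if (typeof function2 === "function") {
                loadTheCharts();
            }
        });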

    Read the article

  • How to check for an existing executable before running it in a post-build event in VS2008?

    - by wtaniguchi
    Hey all, I'm trying to use SubWCRev to get the current revision number of our SVN repository and put it in a file so I can show it in the UI. As I'm working with a web app, I use the following post-build command line:

        "SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"

    It works great, but now I want to make sure I have SubWCRev before running it, so I can skip this post-build step if a fellow developer is not running TortoiseSVN. I tried a few batch snippets here, but couldn't figure this out. Any ideas? Thanks!
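
    A post-build event is just a batch script, so one option is to probe for the executable first; where.exe ships with Server 2003/Vista and later (on older machines, test a known install path with "if exist" instead). A sketch:

        where SubWCRev.exe >nul 2>&1
        if errorlevel 1 goto NoSubWCRev
        "SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"
        :NoSubWCRev
        exit /b 0

    The final exit /b 0 keeps the build green whether or not the tool was found.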

    Read the article

  • Unable to allocate new pages in table space "XXXX" ... but it's 250 megs and I'm only running DDL

    - by Sylvia
    Hello, I'm a DB2 newbie, so I'd appreciate even just pointers on where to start looking. We have great DB2 admins, but they're swamped with other issues right now, so I'm trying to do some troubleshooting on a development database. My situation is that I have a tablespace that's giving me this error message:

        Unable to allocate new pages in table space "[MyTableSpace]".

    However, all I'm doing is running multiple (hundreds of) DDL statements, mainly creating tables but also indexes and PK scripts. So, considering that the tablespace has about 250 MB, I shouldn't be running out of space, right? Here's another thing: it appears that after I leave my script alone for a while, something "resets" and it works for a while, then I begin to have the tablespace issue again. Thanks, Sylvia
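
    Hundreds of CREATE statements do consume pages (each table and index claims at least one extent), so a 250 MB DMS tablespace can genuinely fill. Two hedged starting points, assuming DB2 9.x on LUW and a tablespace named MYTABLESPACE:

        -- How full is it really?
        SELECT TBSP_NAME, TBSP_TYPE, TBSP_TOTAL_PAGES, TBSP_USED_PAGES
          FROM SYSIBMADM.TBSP_UTILIZATION
         WHERE TBSP_NAME = 'MYTABLESPACE';

        -- For a DMS tablespace, let it grow on demand instead of erroring:
        ALTER TABLESPACE MYTABLESPACE AUTORESIZE YES MAXSIZE NONE;

    The intermittent "reset" behavior would also be worth comparing against the tablespace definition (LIST TABLESPACES SHOW DETAIL from the CLP), which your DBAs can read at a glance.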

    Read the article

  • VMWare Fusion Folder Sharing Not Working with Server

    - by Dave Long
    I have an Ubuntu server running in VMware Fusion 3.1.2 on my MacBook Pro for Java development, and all my projects sit on my Mac in ~/Workspace/ColdFusion. I had ColdFusion/ shared with my VM through the VMware Tools, and it was working perfectly up until Friday, when the folder sharing just stopped. There were no updates on either the Mac or Linux side besides an iTunes update. I tried uninstalling the VMware Tools and reinstalling them, but I get an error at the end of the install. It appears that when I reinstall the tools, there are files left over from the old installation. Is there a way to force the uninstall script to completely uninstall and remove all files for VMware Tools? I know the shared folder used to mount at /mnt/hgfs/ColdFusion.
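
    The tools ship their own uninstaller, and a stuck reinstall is usually leftovers from it failing half-way. A hedged sequence for that era of tools (paths are the usual defaults; double-check what exists on your VM before deleting anything):

        # the uninstaller installed alongside the tools
        sudo /usr/bin/vmware-uninstall-tools.pl

        # if the installer still trips over leftovers, these are the usual remnants
        sudo rm -rf /etc/vmware-tools /usr/lib/vmware-tools
        sudo rm -f /usr/bin/vmware-config-tools.pl

        # reinstall from the Fusion-mounted ISO, then confirm the share is back
        ls /mnt/hgfs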

    Read the article

  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files occupying most of the space are the Outlook *.ost files belonging to the users who use the Terminal Server all the time through Remote Desktop. Outlook is installed on the Terminal Server, and various users use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where disk space is a major constraint. The first thing that came to my mind was folder redirection, placing the .ost files (together with the AppData folder) on a network share, but this does not help, because the .ost files are saved in a part of the AppData folder that cannot be redirected. Then I wondered whether it is possible to limit the size of the .ost file, or limit how long it keeps email cached; say, just emails from the last 6 months would be sufficient. Another solution that came to mind is moving the .ost files somewhere else; this requires the old .ost file to be removed and a new one created, and I am not quite sure whether the new .ost file would still have the emails that were cached in the old one, or would start caching from where the other left off. What do you suggest?
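
    The common answers for terminal servers are to turn Cached Exchange Mode off entirely (online mode keeps no .ost at all) or to cap the file size. For the cap, Outlook honors the PST policy values for .ost files as well; a hedged .reg sketch for Outlook 2010 (14.0), with sizes in MB, worth testing on one profile before rolling out:

        Windows Registry Editor Version 5.00

        ; 0x1400 = 5120 MB hard cap, 0x1200 = 4608 MB warning threshold
        [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook\PST]
        "MaxLargeFileSize"=dword:00001400
        "WarnLargeFileSize"=dword:00001200

    A time-based cache window ("last 6 months only") is a sync-slider feature that arrived with Outlook 2013, so it isn't available here.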

    Read the article

  • Import emails from hard disk image?

    - by Chen Xiao-Long
    My old Pentium 3 email server just died on me. Is it possible to import all the emails that I had? I was running Postfix and the Cyrus IMAP server. I can chroot into the hard drive to run any commands if needed. After grep'ing the hard drive, I found that all of my emails are in /var/spool/imap. I assume that I can't just copy all the emails over, so what do I need to do to get them onto my new server?
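
    Cyrus keeps one file per message under the spool plus a mailboxes database, and its reconstruct tool can rebuild the per-mailbox index files after a raw copy. A hedged outline, assuming a fresh Cyrus install and Red Hat-style default paths (Debian puts reconstruct at /usr/sbin/cyrreconstruct and the state under /var/lib/cyrus, so adjust accordingly):

        # stop Cyrus, copy spool and mailbox list from the old disk (mounted at /mnt/olddisk)
        service cyrus-imapd stop
        cp -a /mnt/olddisk/var/spool/imap/. /var/spool/imap/
        cp -a /mnt/olddisk/var/lib/imap/mailboxes.db /var/lib/imap/

        # fix ownership, then rebuild the index/cache files for the user hierarchy
        chown -R cyrus:mail /var/spool/imap /var/lib/imap
        su - cyrus -c '/usr/lib/cyrus-imapd/reconstruct -r -f user'

        service cyrus-imapd start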

    Read the article
