Search Results

Search found 18249 results on 730 pages for 'real world haskell'.

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job, and these working-set jobs run abysmally slow. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only do changed files for one day, the job is small, finishes overnight, and we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, and I sent them that log and a VXgather from the BE host, but they had no fix or workaround. To give an idea, the working-set job in question has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

  • Parsing text files

    - by d03boy
    I encountered a situation tonight where I wanted to parse a text file. I had a very, very long word list containing English words, one per line, and I wanted to get rid of every word (or line) longer than 7 characters. This would be simple in Linux, but I can't seem to find a simple solution in Windows XP. I tried using Notepad++'s regular expression search, but that was a huge failure: I tried the expression .{6,} without finding any matches. I'm really at a loss, because I thought this sort of thing would be extremely easy and there would be tons of tools to accomplish a task like this. It seems like Notepad++ supports every other feature in the world except the very basic ones that seem the most obvious. Another of my goals was to put some code before and after the word on each line, so that

        aardvark
        apple
        azolio

    would turn into

        INSERT INTO Words (word) VALUES ('aardvark');
        INSERT INTO Words (word) VALUES ('apple');
        INSERT INTO Words (word) VALUES ('azolio');

    What suggestions/tools/tips do you have to accomplish tasks similar to this in Windows XP?
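
    For the record, both steps are only a few lines in any scripting language; a minimal Python sketch, assuming the list is in words.txt (the file names and table name are just placeholders):

        # filter_words.py - keep words of at most 7 characters and emit SQL inserts
        with open("words.txt") as src, open("inserts.sql", "w") as out:
            for line in src:
                word = line.strip()
                if word and len(word) <= 7:
                    # single quotes doubled so the word is safe inside a SQL literal
                    escaped = word.replace("'", "''")
                    out.write("INSERT INTO Words (word) VALUES ('%s');\n" % escaped)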

  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):

        user2 - core unlimited
        *     - core 0
        root  - core 0
        *     - rss 512000
        root  - rss 512000
        *     - nproc 100
        root  - nproc 100
        *     - maxlogins 1
        root  - maxlogins 1

    I run a program as user2 (./programname), but /proc/3498/limits says cores are disabled:

        Limit                     Soft Limit   Hard Limit   Units
        Max cpu time              unlimited    unlimited    seconds
        Max file size             unlimited    unlimited    bytes
        Max data size             unlimited    unlimited    bytes
        Max stack size            8388608      unlimited    bytes
        Max core file size        0            0            bytes
        Max resident set          524288000    524288000    bytes
        Max processes             100          100          processes
        Max open files            1024         1024         files
        Max locked memory         65536        65536        bytes
        Max address space         unlimited    unlimited    bytes
        Max file locks            unlimited    unlimited    locks
        Max pending signals       14001        14001        signals
        Max msgqueue size         819200       819200       bytes
        Max nice priority         0            0
        Max realtime priority     0            0
        Max realtime timeout      unlimited    unlimited    us

    Both ulimit -Sa and ulimit -Ha output that cores are disabled:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Why are cores disabled?
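
    For anyone debugging the same thing, a quick way to see the limits a process actually inherits is Python's standard resource module (a diagnostic sketch, not a fix):

        # show_core_limit.py - print the soft/hard core-file limits of this process
        import resource

        soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
        print("core file soft limit:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
        print("core file hard limit:", "unlimited" if hard == resource.RLIM_INFINITY else hard)

    Running it via a fresh login as user2 (e.g. su - user2 -c 'python show_core_limit.py') shows what PAM actually applied at session setup, which may differ from what limits.conf appears to say.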

  • How to copy mailboxes from Exchange 2003 to Exchange 2007 across forests?

    - by Tor Ivar Larsen
    Hi. We're going through a quite difficult conversion from an old ASP solution to an entirely new one. This includes moving mailboxes from Exchange 2003 to Exchange 2007. We want to do this without deleting the old mailboxes on the Exchange 2003 server, to have a fall-back in case something goes wrong. I have investigated the Move-Mailbox cmdlet in the Exchange 2007 shell, and it seems to fit our needs quite well, the only problem being that we want to keep the old mailboxes. This could easily be accomplished with -SourceMailboxCleanupOptions, but we can't use that when we have used the -AllowMerge switch. The reason we need -AllowMerge is that all the user accounts with connected mailboxes are already created on Exchange 2007 (by some automatic user-creation tool; no real relevance to the case in question). The twist is that the Exchange servers are in two different forests: Windows 2003 SP1 on DC1 and Windows 2003 SP2 on DC2 in forest 1; Windows 2003 R2 SP2 on DC1 in forest 2. Can we use Move-Mailbox safely for this purpose? And if yes, how?

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3 GHz is in fact a processor running at 12 GHz? I once got into a "Mac vs. PC" argument (which, by the way, is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1 GHz machines because they were dual-processor G4s, each running at 500 MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2 GHz = 1.2 GHz", and that got me thinking again about whether there's a real answer to this. So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities:

        1. Each core is in fact doing x calculations per second, so the total number of calculations is x * (number of cores).
        2. Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...) / (number of cores).
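
    The two readings give different numbers for the same chip; a toy Python snippet just to make the distinction concrete (illustrative numbers only):

        # clock_speed.py - aggregate throughput vs. per-core clock
        cores = 4
        per_core_ghz = 3.0

        # Reading 1: total cycles available per second across all cores
        aggregate_ghz = per_core_ghz * cores   # 12.0 "GHz" of combined throughput

        # Reading 2: the rate any single instruction stream experiences
        effective_clock_ghz = per_core_ghz     # still 3.0 GHz per core

        print("combined cycle throughput: %.1f GHz-equivalent" % aggregate_ghz)
        print("clock seen by one thread:  %.1f GHz" % effective_clock_ghz)

    Only a perfectly parallel workload ever sees anything like the first figure; a single-threaded program is bounded by the second.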

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability: if a websockets server goes down, all channel data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information; that is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data, so that in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

        1. Support the failover scenarios contemplated by the websockets Network Working Group (http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1), most importantly so that missed messages are sent when a client connects to a failover server.
        2. Support scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for. EDIT: After some research, the following installed options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API, but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open source that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.
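
    Rolling your own at the application layer, which the poster wants to avoid, would look roughly like this minimal Python sketch (the class, storage format, and file-based log are all invented for illustration; a real deployment would put the log on replicated storage so a replacement server can recover it):

        # durable_channel.py - append-only message log with replay from a sequence number
        import json

        class DurableChannel:
            def __init__(self, log_path):
                self.log_path = log_path
                self.seq = 0
                self.messages = []   # in-memory copy; the log file is the durable store
                self._recover()

            def _recover(self):
                # On startup (including failover to a new server), rebuild state from the log.
                try:
                    with open(self.log_path) as f:
                        for line in f:
                            entry = json.loads(line)
                            self.messages.append(entry)
                            self.seq = entry["seq"]
                except FileNotFoundError:
                    pass

            def publish(self, payload):
                self.seq += 1
                entry = {"seq": self.seq, "payload": payload}
                with open(self.log_path, "a") as f:
                    f.write(json.dumps(entry) + "\n")   # durable before it counts as sent
                self.messages.append(entry)
                return self.seq

            def replay_since(self, last_seen_seq):
                # Everything a client missed while disconnected (or all history, from 0).
                return [m for m in self.messages if m["seq"] > last_seen_seq]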

  • Building a PC, advice on SSD/Hybrid Hard Drives

    - by Jamie Hartnoll
    I am looking at building a new PC; it's mainly for office (graphics-heavy) use and programming, and I'm looking for good performance opening and closing programs and files, as well as a fast boot. I plan to have 3 primary hard drives:

        Windows 7
        Programs (Photoshop etc.)
        Current files

    (There'll also be a large-capacity backup drive, but this will be the Seagate drive I already have.) So, my question is: looking at standard "old fashioned" hard drives and SSD drives, obviously there's a massive price difference. I have been looking at drives like this: http://www.ebuyer.com/268693-corsair-120gb-force-3-ssd-cssd-f120gb3-bk-cssd-f120gb3-bk and this: http://www.ebuyer.com/321969-momentus-xt-750gb-sata-2-5in-7200rpm-hybrid-8gb-ssd-in-st750lx003 Having no experience with either, I don't know which is the most efficient option. Clearly the SSD will have better performance. If, for example, I had an SSD for Windows (say about 100 GB), that would clearly give me the boot speed I want; then I guess my real questions are:

        1. If I were to buy one more SSD, would it give the greatest improvement in everyday performance if used to store programs, or currently used files?
        2. Given that the OS is on an SSD, should I not bother with the 3 drives and instead partition the hybrid drive to store programs and currently used files?

    Obviously, option two is cheaper and option one could cause me storage issues, but that's when I can dump files I am not currently using onto another drive. Anyway, I am open to suggestions... so what do you suggest?!

  • Active Directory Child Domain Replication Problems

    - by MikeR
    Hi, I've recently inherited an Active Directory (all DCs Windows 2003) which has been configured with several child domains that are used as test environments for our CRM software. Two of these child domains have been used for testing using dates in the future (2015), throwing them well outside the Kerberos tolerance for time, and they're flooding my event logs with replication errors such as the following:

        Description: The attempt to establish a replication link for the following writable directory partition failed.
        Directory partition: CN=Schema,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
        Source domain controller: CN=NTDS Settings,CN=TESTDC001,CN=Servers,CN=SiteName,CN=Sites,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
        Source domain controller address: 38e95b2a-35af-4174-84ba-9ab039528cce._msdcs.ad.xxxxxxx.com
        Intersite transport (if any):
        This domain controller will be unable to replicate with the source domain controller until this problem is corrected.
        User Action: Verify if the source domain controller is accessible or network connectivity is available.
        Additional Data
        Error value: 5 Access is denied.

    I'd also like to upgrade to Windows 2008 at some point, but wouldn't want to attempt any schema updates while I'm not 100% confident in the replication. I'm guessing my only real solution will be to get rid of these child domains. The child domains are operating as stand-alone domains; each DC is up and running and authenticating test users fine. I'm guessing the best solution would be to delete the domains (although I'd happily be told otherwise). The clock-forwarding appears to have been happening for several years, so I'm assuming I can't just put the clocks right (I'm guessing the scope for this would be 180 days, the same as the tombstone lifetime). Given the replication errors, would I be able to dcpromo each child domain's DC, select it as the last domain controller in the domain, and have the child domain deleted? Or would I be better off treating each as an orphaned domain and using Microsoft's instructions to clean up as such? Any advice would be much appreciated.

  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. The readme file of PostgreSQL says:

        Shared Memory

        PostgreSQL uses shared memory extensively for caching and inter-process
        communication. Unfortunately, the default configuration of Mac OS X does
        not allow suitable amounts of shared memory to be created to run the
        database server.

        Before running the installation, please ensure that your system is
        configured to allow the use of larger amounts of shared memory. Note that
        this does not 'reserve' any memory so it is safe to configure much higher
        values than you might initially need. You can do this by editing the file
        /etc/sysctl.conf - e.g.

            % sudo vi /etc/sysctl.conf

        On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:

            kern.sysv.shmmax=1610612736
            kern.sysv.shmall=393216
            kern.sysv.shmmin=1
            kern.sysv.shmmni=32
            kern.sysv.shmseg=8
            kern.maxprocperuid=512
            kern.maxproc=2048

        Note that (kern.sysv.shmall * 4096) should be greater than or equal to
        kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096.

        Once you have edited (or created) the file, reboot before continuing with
        the installation. If you wish to check the settings currently being used
        by the kernel, you can use the sysctl utility:

            % sysctl -a

        The database server can now be installed.

    I'm a real beginner with all this but need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't that be harmful to my system? Thank you in advance. Matthieu
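
    The readme's two numeric constraints are easy to sanity-check before rebooting; a tiny Python sketch using the values quoted above:

        # check_shm.py - validate the sysctl.conf shared-memory values from the readme
        shmmax = 1610612736   # bytes
        shmall = 393216       # 4 kB pages

        assert shmmax % 4096 == 0,      "shmmax must be a multiple of 4096"
        assert shmall * 4096 >= shmmax, "shmall * 4096 must be >= shmmax"
        print("ok: shmall covers", shmall * 4096, "bytes; shmmax is", shmmax, "bytes")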

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

        1) The user's browser requests http://www.example.com/files/file1.zip
        2) The request goes to server A, based on the DNS A record for example.com.
        3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
        4) Server A forwards the request to server B.
        5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
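
    For experimenting with step 4 on a test network, the MAC-rewrite idea can be prototyped with scapy (a very rough sketch under big assumptions: Python/scapy rather than a production data path, both servers on the same L2 segment, and all addresses are placeholders; remote data centers would need tunneling, and production setups typically use LVS-style direct-routing tools instead):

        # dsr_forward.py - layer-2 rewrite sketch for step 4 (requires scapy and root)
        from scapy.all import Ether, sniff, sendp

        IFACE = "eth0"
        SERVER_A_MAC = "aa:bb:cc:dd:ee:01"   # this host's NIC (placeholder)
        SERVER_B_MAC = "aa:bb:cc:dd:ee:02"   # the real server that should answer (placeholder)
        SERVICE_IP = "203.0.113.10"          # the shared service address (placeholder)

        def forward(pkt):
            pkt[Ether].dst = SERVER_B_MAC        # retarget the frame at layer 2 only;
            pkt[Ether].src = SERVER_A_MAC        # the IP header stays intact, so server B
            sendp(pkt, iface=IFACE, verbose=False)  # replies straight to the client

        # Matching on our own MAC as the frame destination keeps the re-emitted
        # frames (now addressed to server B) from being captured again in a loop.
        bpf = "tcp dst port 80 and dst host %s and ether dst %s" % (SERVICE_IP, SERVER_A_MAC)
        sniff(filter=bpf, prn=forward, iface=IFACE)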

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as the store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate users when they access the web application running in Tomcat; based on their authentication we can tell whether a user has access to paid-for content or simply the free stuff. So I envision the flow of a request being something like this:

        1. Authenticated request to Tomcat
        2. If the user is a "paid" user, display links to premium content
        3. Direct requests for paid content back through Tomcat, to prevent direct access by non-paying users
        4. Tomcat makes the request to S3 through a web cache to keep our costs down
        5. Content is returned to the user

    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed describing content that it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
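
    The read-through idea in miniature (a sketch in Python rather than the poster's Java stack, where the AWS SDK's getObject plays the same role; it assumes the boto3 SDK, and the bucket name and cache directory are placeholders):

        # s3_cache.py - read-through cache: serve from disk if present, else fetch from S3
        import os
        import boto3

        CACHE_DIR = "/var/cache/s3content"   # placeholder local cache location
        BUCKET = "example-content-bucket"    # placeholder bucket name
        s3 = boto3.client("s3")

        def get_content(key):
            local = os.path.join(CACHE_DIR, key.replace("/", "_"))
            if os.path.exists(local):                  # cache hit: no S3 request, no charge
                with open(local, "rb") as f:
                    return f.read()
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(local, "wb") as f:               # populate the cache for next time
                f.write(body)
            return body

        def invalidate(key):
            # Called when fresh content is published (e.g. from a PubSubHubbub-style ping).
            local = os.path.join(CACHE_DIR, key.replace("/", "_"))
            if os.path.exists(local):
                os.remove(local)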

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

        1x HP DL380, high-powered, for a SQL Server database
        1x HP DL360, mid-powered, for our core application server
        6x HP DL320, low-powered, for our front ends

    We run our training/testing/support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading them would be expensive, and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test/dev environments on ESXi using direct storage on a couple of high-powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers, we can implement an iSCSI SAN and one or two high-powered hosts, hopefully so that when it comes time to replace our live servers, we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers - all of our kit is currently HP - so our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs, and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links, or opinions would be much appreciated. Thanks

  • How would I set up iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases all pointing to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?), and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my GMail account and show the messages as coming from me instead of the original senders (can anyone confirm that this is accurate?). An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. That leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy without removing the account entirely? Or is this just wishful thinking? Thanks in advance. Scott

  • How (in)secure are cell phones in reality?

    - by Aron Rotteveel
    I was recently re-reading an old Wired article about the Kaminsky DNS vulnerability and the story behind it. In this article there was a quote that came across as a little exaggerated to me:

        "The first thing I want to say to you," Vixie told Kaminsky, trying to contain the flood of feeling, "is never, ever repeat what you just told me over a cell phone." Vixie knew how easy it was to eavesdrop on a cell signal, and he had heard enough to know that he was facing a problem of global significance. If the information were intercepted by the wrong people, the wired world could be held ransom. Hackers could wreak havoc. Billions of dollars were at stake, and Vixie wasn't going to take any risks.

    When reading this I could not help but feel it was a bit overblown and theatrical. Now, I know absolutely nothing about cell phones and the security problems involved, but to my understanding, cell phone security has improved considerably over the past few years. So my question is: how insecure are cell phones in reality? Are there any good articles that dig a bit deeper into this matter?

  • My VPS host (rosehosting) sold me a domain name, but I can't get it to work

    - by Faisal Vali
    My VPS host (RoseHosting) sold me a domain name, but I can't get it to work. They sent me an email (almost a month ago) with the following:

        DNS Servers (unless you ordered your own DNS servers):
        ns1.rosehosting.com (216.114.78.148)
        ns2.rosehosting.com (216.114.78.155)
        Operating System: Ubuntu 9.04
        Domain Name: mytestdomainfv.com
        Host Name: mytestdomainfv.com
        IP Address: ....
        Physical Host Name: Vs####.rosehosting.com

    When I type in the physical host name or the IP from a remote computer, I get connected to my VPS. But when I type 'mytestdomainfv.com' the name is never resolved, and it has been a month now. I thought they would configure it so that it would resolve, but it seems that they haven't. Does anyone know how I can get 'mytestdomainfv.com' to point to my VPS? I looked at some of the other similar questions, but they talk about forwarding GoDaddy domain names, so I'm not sure if they apply - but then again, it might just be my naivete. Any direction will be greatly appreciated. Thanks! p.s. mytestdomainfv.com is not the real domain name
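
    A quick way to see what (if anything) the name resolves to from any machine with Python installed (just a diagnostic sketch; the domain is the placeholder from the question):

        # check_dns.py - report whether the domain resolves and to which address
        import socket

        domain = "mytestdomainfv.com"
        try:
            print(domain, "resolves to", socket.gethostbyname(domain))
        except socket.gaierror as e:
            # NXDOMAIN or no A record - the zone probably hasn't been published yet
            print(domain, "does not resolve:", e)

    Checking the domain's whois output would also show whether the two rosehosting.com name servers were ever registered as authoritative for it, which is usually the missing step in cases like this.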

  • pdo_mysql installation

    - by Arsenal
    Hello, on my server I'm trying to install PHProjekt (6), which requires pdo_mysql. I thought the installation of pdo_mysql would be rather straightforward. I first tried pecl (pecl install pdo_mysql) after installing the devel packages etc., but this came up with a permission-denied error. I solved that by using directories that were accessible, but it then failed with a "cannot run C compiled programs" error. It also says to check config.log for more details, but ironically config.log is automatically removed if the installation process fails. Yet when I compile and run a "hello world" .c file myself, it works perfectly. I then tried to download the pdo_mysql source and install it myself (using configure and make install). This seemed to do the job, but when I restarted my Apache... no sign of pdo_mysql anywhere, even though I adjusted my php.ini file. I have read somewhere that you need to recompile PHP with the pdo_mysql option enabled. But how does one do that (I'm using CentOS 4)? And isn't there any other way? Thanks!
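
    If recompiling does turn out to be necessary, the usual shape of it is to take the current configure line from the "Configure Command" section of php -i and re-run it with the PDO MySQL driver added (a sketch, not a drop-in command; the bracketed part stands for your existing flags):

        ./configure [existing flags from 'php -i'] --with-pdo-mysql
        make
        sudo make install

    Depending on the PHP version in use, the distribution's php-pdo / php-mysql packages may also provide the extension without a rebuild.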

  • Change Windows Authentication user for Sql Server Management Studio

    - by Asmor
    We're using SQL Server 2005 with Windows Authentication set up, so normally when you log in using e.g. SQL Server Management Studio, it forces you to log in as MACHINE_NAME\Username. Anyway, on this one particular computer, the person said they had to make a new account called User01 to do something, and showed me where she'd created it under Security in the "master" system database. So now when she logs in, it's listed as MACHINE_NAME\User01 (not her actual Windows user name). It's still set to Windows Authentication, though, and I'm unable to change the login name. Now here's where the real problem comes in: I didn't realize that she was being logged in under this user name at the time, and I disabled it to see what would happen. Now I can't log into the server under her account. I created a new account in Windows called test, and as expected SSMS had the username as MACHINE_NAME\test, and I was able to log in fine. However, the area where the User01 account was listed is not visible to me as far as I can tell, so I can't re-enable it. I also tried running the following query:

        alter login User01 ENABLE

    and got this error:

        Msg 15151, Level 16, State 1, Line 1
        Cannot alter the login 'User01', because it does not exist or you do not have permission.

    So, in a nutshell, ideally I'd like to re-enable User01 somehow, just to get things back to where they used to be. Failing that, how can I force SSMS to log in using the Windows account name as it should, rather than trying to use User01?
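
    As a diagnostic, a connection with administrative rights can list the server-level logins and their disabled flag from the catalog views (sys.server_principals is available in SQL Server 2005; this only shows whether User01 still exists and is disabled, it doesn't fix permissions):

        SELECT name, type_desc, is_disabled
        FROM sys.server_principals
        WHERE name LIKE '%User01%';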

  • Mac OS X Lion Apache Server not Found

    - by Burak Erdem
    After upgrading to Lion 10.7.2 today, my Apache virtual hosts are not working anymore. When I go to http://XYZ.localhost, it says "server not found". I am using Apache on Mac OS X Lion, and until today it was working fine. I can access http://localhost but I can't access http://XYZ.localhost. My /etc/hosts file is like below:

        127.0.0.1 XYZ.localhost

    My /etc/apache2/extra/httpd-vhosts.conf file is like below:

        <VirtualHost *:80>
            ServerName XYZ.localhost
            DocumentRoot /Library/WebServer/Documents/XYZ
            <Directory /Library/WebServer/Documents/XYZ>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I think I once had this problem after another OS X update, but I can't remember how I solved it. Is it a user permission issue? Or is there something wrong with Apache or some other setting? EDIT: It seems like my /etc/hosts file is not being consulted at all. Even if I add something like "127.0.0.1 apple.com", it still goes to the real apple.com. Maybe this might help to solve the problem.

  • How to edit known_hosts when several hosts share the same IP and DNS name?

    - by Frédéric Grosshans
    I regularly ssh into a computer which is a dual-boot OS X / Linux machine. The two OS instances do not share the same host key, so they can be seen as two hosts sharing the same IP and DNS name. Let's say the IP is 192.168.0.9, and the names are hostname and hostname.domainname. As far as I understand, the solution for being able to connect to both hosts is to add them both to the ~/.ssh/known_hosts file. However, that is easier said than done, because the file is hashed and probably has several entries per host (192.168.0.9, hostname, hostname.domainname). As a consequence, I get the following warning:

        Warning: the ECDSA host key for 'hostname' differs from the key for the IP address '192.168.0.9'

    Is there an easy way to edit the known_hosts file while keeping the hashes? For example, how can I find the lines corresponding to a given hostname? How can I generate the hashes for some known hosts? The ideal solution would allow me to connect seamlessly to this computer with ssh, no matter whether I call it 192.168.0.9, hostname or hostname.domainname, nor whether it uses its Linux host key or its OS X host key. However, I still want to receive a warning if there is a real man-in-the-middle attack, i.e. if a key other than these two is used.
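
    For the two concrete sub-questions, ssh-keygen itself can search and maintain a hashed known_hosts file (standard OpenSSH options; check man ssh-keygen for the version in use):

        # find the (hashed) entries matching a given hostname
        ssh-keygen -F hostname

        # remove all entries for a host (handy before re-adding a key)
        ssh-keygen -R hostname

        # rewrite the whole file, hashing any plain-text entries in place
        ssh-keygen -H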

  • How do I specify the emergency location in CDP?

    - by chrish
    In the LLDP-MED and Cisco Discovery Protocol whitepaper, Cisco compares LLDP-MED and CDP. The part I am interested in is emergency location configuration. In LLDP-MED, I can specify the Emergency Location Identification Number (ELIN), and that number will be used by some IP phones (e.g. Aastra) when placing emergency calls. The whitepaper states:

        Location Identification Discovery
        This capability is important because it normally provides location information from the switch to the phone. (If the phone is configured with location information or can determine its location, then it may send this information to the switch. However, the real value is getting this information from the switch to the phone for phones that cannot determine their own location.) Location identification discovery allows the phone to be aware of its location - information that can be used for location-based applications on the phone. More importantly, this capability can be used to provide location information when making emergency calls. Both Cisco Discovery Protocol and LLDP-MED support the transportation of location information. However, LLDP-MED has more supported data formats than Cisco Discovery Protocol.

    I have found the documentation on how to set the location and associate it with interfaces for LLDP-MED. How is this done for CDP? Is ELIN supported with CDP?

  • Can I disable this Windows (XP) Security Warning?

    - by FumbleFingers
    I recently reformatted my hard drive and reinstalled Windows XP (I know I'll have to take the plunge and commit to Win8 "real soon now", but I'm just not quite ready for the upheaval yet! :) I used to use WinRAR (and later, when I got fed up with the "nag" messages, 7-Zip), but I haven't installed either of them in my new configuration, so I must be using the built-in XP facility when I open *.zip files. For years, I've been opening downloaded *.zip archives and using drag & drop to copy to a File Explorer window open on the folder where I want the files to end up (usually My Documents\Downloads). But now I find that when I "drop" the file(s), I get a pop-up Windows Security Warning saying:

        Are you sure you want to copy or move files to this folder?
        You should only move or copy files from locations that you trust

    Can anyone explain why I'm getting this message, and is there any (reasonably easy, please! :) way to suppress it? Since I've already put the *.zip file on my computer, it seems a bit late to ask if I trust it. (Thus far, the files in question have always been plain text, so it's not a matter of dodgy programs, etc.) Apologies for the low-quality image - I don't have the appropriate tools or knowledge to do any better, and it doesn't help that my "PrtScr" screen capture has included what would have been on my second monitor (TV) if it had been turned on. If you can't read it, trust me - I have copied the text verbatim.

  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have an issue with an orphan IBM JVM process being created in the process tree. For example:

        C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has this simple implementation:

        import time
        i = 0
        while (1):
            i = i + 1
            print "Hello World " + str(i)
            time.sleep(3.0)

    My machine has the following JVM information:

        C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
          -Xmca32K        RAM class segment increment
          -Xmco128K       ROM class segment increment
          -Xmns0K         initial new space size
          -Xmnx0K         maximum new space size
          -Xms4M          initial memory size
          -Xmos4M         initial old space size
          -Xmox1624995K   maximum old space size
          -Xmx1624995K    memory maximum
          -Xmr16K         remembered set size
          -Xlp4K          large page size
                          available large page sizes: 4K 4M
          -Xmso256K       operating system thread stack size
          -Xiss2K         java thread stack initial size
          -Xssi16K        java thread stack increment
          -Xss256K        java thread stack maximum size
        java version "1.6.0"
        Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
        IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
        J9VM - 20091001_043491
        JIT  - r9_20090902_1330ifx1
        GC   - 20090817_AA)
        JCL  - 20091006_01

    While the program was running, I tried to kill it, and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is wrongly implemented. Any suggestions?
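
    If the orphan appears because killing wsadmin leaves behind the JVM it spawned, one blunt workaround on Windows is to end the whole process tree with taskkill's standard switches (the PID is a placeholder for the wsadmin process ID):

        taskkill /PID 1234 /T /F

    Here /T also terminates child processes started by the target, and /F forces termination.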

  • Program to remove exact duplicate files while caching search results

    - by John Thomas
    We need a Windows 7 program to remove/check duplicates, but our situation is somewhat different from the standard one, for which there are enough programs. We have a fairly large static archive (collection) of photos spread over several disks; let's call them disks A..M. We also have some disks (let's call them disks 1..9) which contain some duplicates of photos found on disks A..M. We want to add new disks (N, O, P, and so on) to our collection which will contain the photos from disks 1..9, but, of course, we don't want to have any photo two (or more) times. Theoretically the task can be solved with a regular file-duplicate remover, but the time needed would be very large. Ideally, as far as I see it now, the real solution would be a program which scans disks A..M, stores the file sizes/hashes of the photos in an indexed database/file(s), and then checks the new disks (1..9) against this database. However, I'm having a hard time finding such a program (if one exists). Other things to note:

        we consider that disks A..M (the collection) don't have any duplicates on them
        the file names might have been changed
        we aren't interested in the approximate (fuzzy) comparison found in some photo-comparison programs - we hunt for exact duplicate files
        we aren't afraid of the command line :-)
        we need it to work on Win7/XP
        we prefer (of course) freeware

    TIA for any suggestions, John Th.
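
    Absent a ready-made tool, the exact workflow described (index the collection once, then check new disks against the stored index) is a short script; a sketch using file size plus SHA-256, with the drive letters as placeholders:

        # dup_index.py - build a hash index of the collection, then check new disks against it
        import hashlib
        import json
        import os

        def file_sha256(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        def build_index(roots, index_path):
            # Scan the collection disks (A..M) once and cache size+hash per file.
            index = {}
            for root in roots:
                for dirpath, _dirs, files in os.walk(root):
                    for name in files:
                        p = os.path.join(dirpath, name)
                        key = "%d:%s" % (os.path.getsize(p), file_sha256(p))
                        index[key] = p
            with open(index_path, "w") as f:
                json.dump(index, f)

        def find_duplicates(root, index_path):
            # Check a new disk (1..9) against the cached index; no rescan of A..M needed.
            with open(index_path) as f:
                index = json.load(f)
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    p = os.path.join(dirpath, name)
                    key = "%d:%s" % (os.path.getsize(p), file_sha256(p))
                    if key in index:
                        print("duplicate:", p, "==", index[key])

        # build_index(["D:\\", "E:\\"], "photo_index.json")   # collection disks
        # find_duplicates("F:\\", "photo_index.json")         # a disk to merge

    Because the names may have changed, matching is done purely on size plus content hash, never on file name.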

  • Using Varnish (only) for DDoS mitigation

    - by Martin Kanters
    My VPS is suffering from a (D)DoS attack: a SYN flood with spoofed IPs. I'm searching for ways to defend against it, at least a bit. The server runs a DirectAdmin Apache 2 web server, mainly serving PHP and MySQL. We are using CloudFlare, which claims to mitigate (D)DoS at some level, but the attacker now knows our real IP address, so CloudFlare isn't helping at all. I've done some searching and found out about enabling SYN cookies to defend against this; I checked my settings, and it seems they were enabled all along. I've also read that Varnish is able to defend against SYN flooding and Slowloris attacks, and I'm now pretty interested in using it. The thing is that CloudFlare is already caching a lot for us, and I don't wish to spend too many resources on Varnish. Is it possible and smart to set up Varnish only for better handling of requests? Are there perhaps better ways that I've missed? Thanks in advance, Martin
