Search Results

Search found 11553 results on 463 pages for 'disk utility'.


  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration. e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with: 1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can no longer access the share on OLDBOX to finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX? 2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.? Things I've thought about trying: For problem 1: a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed) MS KB articles and Googling around suggests there's a utility called CSCCMD that's meant for this exact scenario...but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? For problem 2: a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this. In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Windows Server 2003 Standard R2 CD 1 cannot boot: freeze at No Emulation

    - by TGP1994
    Hi everyone. I've been interested in the Windows Server line of OSes, so since I apply for DreamSpark, I thought I'd go download it and try it. I just so happened to have an old desktop that I was using awhile ago for Windows XP, so I imaged the drive in preparation for it to be overwritten with the new OS. (This system has an Asus A7V8X-X motherboard, an AMD Athlon XP 2800+ processor, and 1GB of RAM.) I tried burning the first disk image on my newer desktop computer, running Windows XP, although the CD burner consistently failed at a particular track area from cd to cd, so it seemed like the burner was toast there. Fortunately, I had a laptop, so I transferred the images over to that, then burned the first disc there. First time around went great, and the burning program reported no errors. I then took the CD over to the computer that I was intending to install Server onto, set the BIOS to boot from the CD drive, then I booted it up. Like normal, after the POST, it printed "Boot from ATAPI CD-Rom: No Emulation", which I was used to seeing with bootable cds. I waited for the "Press any key to continue..." message that I had become so familiar with in windows discs, although I saw none. The computer sat there for about 5 seconds with the cd spinning, then it spun down like it was done reading it. Nothing else happened. No response from the keyboard. I tried again, same result. I then downloaded IMGBurn, and I put the burned cd into the laptop that burned it originally. I also downloaded a fresh image from the dreamspark site. I ran a verify session, and everything checked out. I later tried getting various DOS startup discs, then I tried booting the winnt binary, which supposedly initiates the installation process. Either the shells reported that not enough memory was available (since they would be running in low memory mode), or FreeDOS in particular would report Illegal instructions right away. Is the image corrupt at dreamspark, or am I doing something wrong?

    Read the article

  • Logitech Optical Mouse Frozen In Middle of Windows XP Pro Screen

    - by Code Sherpa
    Hi. I have a Logitech Optical Mouse/Keyboard. I have been using them just fine with the system drivers for almost a year now. I recently updated my Kaspersky software and rebooted. Now the mouse is frozen in the middle of my screen. I am not able to login to the Windows XP Pro box that has the frozen mouse (because i can't work the mouse) but am able to remote desktop to this computer. Things I know / have tried: When I boot on the problem computer, I am able to use the keyboard, but not the mouse. I have installed the latest version of Logitech's SetPoint (with the updated drivers) on the problem computer (via remote desktop) and that didn't seem to matter. I bought new batteries for the mouse and that didn't matter. I have tried the mouse/keyboard on another computer and the mouse works just fine there. My suspicion is that the Kaspersky install has overwritten a driver of some sort. Things I have not done (and would appreciate detailed steps if you feel this is the way to go): 1) Uninstalled all the mouse drivers on the machine and reboot. Then, reinstall. Note: When I get to the Device Manager I don't see an option for Human Interface Devices (where the mouse device is). Here are my options: Computer, Disk Drives, DVD/CD-Rom drives, Floppy controllers, IDE ATA/ATAPI, Imaging devices, Network Adapters, Other devices, Ports, Processors, Sound, video, and gaming, System devices, USB controllers. Also, I should point out that Video Controller is the only thing under Other devices and it has a yellow exclamation mark. The same is true for all the items under Universal Serial Bus controllers. I think this means I have to update my BIOS but, since my mouse was working just fine without doing that, I don't think that is my problem. So, how do I get to my Mouse Device? 2) Update my BIOS. Note: As pointed out above, I don't think this matters as my mouse was working just fine under my computer's current BIOS version. Thanks for your help.

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers, and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp, etc. are of no use, as they only copy files, not partitions, attributes, groups, etc.; i.e. they do not produce a result which can simply be re-imaged onto a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla (looks difficult to install and invasive); Mondo only seems to support backup if you are local to the machine; Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup tools? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
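
    One low-tech way to get bootable, restorable images without per-server agents (a sketch, assuming root SSH keys from a central backup host, that /dev/sda is each machine's system disk, and that the external USB drive is mounted at /mnt/usbdisk; adjust both for your layout) is to stream a compressed dd image of each server over SSH:

      #!/bin/bash
      # Pull a compressed raw disk image from each server over SSH.
      SERVERS="web01 web02 web03"          # extend to all 40+ hosts
      DEST=/mnt/usbdisk/images
      mkdir -p "$DEST"
      for host in $SERVERS; do
          echo "Imaging $host ..."
          # dd runs on the remote box, gzip shrinks it in transit,
          # one image file per host lands on the USB drive
          ssh root@"$host" "dd if=/dev/sda bs=4M conv=sync,noerror | gzip -c" \
              > "$DEST/${host}-sda.img.gz"
      done
      # Restore later from a live CD on the rebuilt box:
      #   gunzip -c web01-sda.img.gz | ssh root@newbox "dd of=/dev/sda bs=4M"

    Note that imaging a mounted, running filesystem can capture it in an inconsistent state, so quiesce MySQL (or schedule the run during a quiet window) before trusting the images for bare-metal restores.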

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server so that others (with appropriate permissions) can use them and that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library? What I envision is a client program that is pushed out to users' PCs that coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group to protect against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages: Files can often be transferred from a nearer machine. Disk space grows automatically as your company does. Should ultimately be cheaper, as you don't need to keep a separate set of disks I can see a few downsides as well: Occasional degradation of user pc performance, if the machine has to serve or accept a large file transfer during a busy period. Writes have to be propogated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing) Still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start with saying that I'm a total noob wrt. to server virtualization. That is, I use VMs often during development, but they're simple desktop machine things for me. Now to my problem: We have two (physical) build servers, one master, one slave running Jenkins to do daily tasks and build (Visual C++ Builds) our release packages for our software. As such these machines are critical to our company, because we do lot's releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to setup them again should they go bust. (But setting up backup that I'd know would work in case of HW failure would even be more pain, so we have skipped that until now.)) Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each Build-Server (master or slave) is a fully configured (installs, licenses, shares in case of the master, ...) Windows Server box. I would now ideally like to just convert the (two) existing physical nodes to VM images and run them. Later add more VM slave instances as clones of the existing ones. And here begin my questions: Should I go for one VM per one hardware-box or should I go for something where a single hardware runs multiple VMs? That would mean a single point of failure hardware wise and doesn't seem like a good idea ... or?? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build-node per hardware doesn't seem to make much sense?? Wrt. to hardware options, does it make any difference which VM software we use (VMWare, MS, Virtualbox, ... ?) (We're using Windows exclusively for our builds.) Regarding budget: We have a normal small company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$ it's going to cost. If it's free - the better. I strongly prefer solutions where there's no multi-k$ maintenance costs per year.
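
    On the "convert the existing physical nodes to VM images" step, one rough approach (a sketch, not a recommendation; it assumes you can boot each build server from a Linux live CD, that /dev/sda is its system disk, and that a network share is mounted at /mnt/nas) is to capture the raw disk and convert it on the VM host:

      # Booted from a Linux live CD on the physical build server:
      dd if=/dev/sda of=/mnt/nas/buildmaster.raw bs=4M conv=sync,noerror

      # On the VM host, convert the raw image to a virtual disk:
      VBoxManage convertfromraw /mnt/nas/buildmaster.raw buildmaster.vdi --format VDI
      # or, for a Hyper-V style VHD target:
      # qemu-img convert -O vpc /mnt/nas/buildmaster.raw buildmaster.vhd

    Expect the Windows guest to need storage-driver and activation attention on its first boot inside the hypervisor; dedicated P2V tools handle that part more gracefully.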

    Read the article

  • Melting Laptop Power Supply Tip

    - by AlReece45
    Several (6-7) months ago, my laptop power supply cord got a cut in it and stopped working. Having gotten cheap (and short) power supplies in the past, I decided to buy 2 brand new ones from the manufacturer (ASUS). Now, I used my laptop a little less than usual between February and March. During that time I noticed a few times that the power supply, even though plugged in, did not provide power. Often the computer would just shut off on me. I figured it was just that one power supply being bad. I had left the alternate at my parents' house in another state and asked them to ship it to me. Now, at work the other day I wanted to get a file off of the hard disk. So I booted it up, knowing that it had a low battery, and plugged it in. During the first 2 minutes of use, I was told that the battery was low and I should plug it in. I unplugged it, inspected the end (it being plugged in, this was suspicious), and decided I shouldn't plug it back in: the plastic on the tip was melting from the heat of the metal on the tip. The computer had simply booted up and I had the file manager open. It had not been on for more than 10 hours. Now, I know that computers tend to get pretty hot. However, the melting point of plastic is usually above 200°C, so that's much hotter than the computer should be getting. I went and bought a THIRD power supply, this time a universal one from Best Buy (it was very fast to buy and test). I tried it out on the computer and its tip is melting as well. My older laptop that uses the universal power supply uses it perfectly (about a week and a bit of use now). I have tried using the computer without the battery, with the same effect. Obviously, this is not a problem with the power supply. My roommate and I, being trained computer techs, were contemplating taking the computer apart and desoldering and resoldering the power tip. (The computer is about 6 months out of its 2-year warranty.) We're hoping that will correct the issue, as I would prefer to put my money toward a good desktop rather than yet ANOTHER $1200+ laptop. Is there anything I'm missing here that might cause the tip on the power unit to melt?

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
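
    If the one-file-per-packet route is kept, the filesystem can at least be prepared for it up front. A sketch, assuming a dedicated data partition at /dev/vdb1 (a hypothetical device name): raise the inode count at mkfs time and mount ext3 with full data journaling so a power cut leaves either a whole file or no file:

      # ext3 with one inode per 4 KB of space (tune -i to your packet rate)
      mkfs.ext3 -i 4096 -L bufferdata /dev/vdb1

      # Full data journaling plus write barriers
      mount -o data=journal,barrier=1 /dev/vdb1 /var/lib/buffer

      # Make it permanent
      echo '/dev/vdb1 /var/lib/buffer ext3 data=journal,barrier=1 0 2' >> /etc/fstab

    On the Java side, the usual pattern for the same guarantee is to write each packet to a temporary file, call FileDescriptor.sync(), and then rename it into place, so a half-written file never carries the final name.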

    Read the article

  • Lots of dropped packages when tcpdumping on busy interface

    - by Frands Hansen
    My challenge I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up Log all traffic in promiscuous mode from 2 interfaces Those interfaces are not assigned an IP address pcap files must be rotated per ~1G When 10 TB of files are stored, start truncating the oldest What I currently do Right now I use tcpdump like this: tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER The $FILTER contains src/dst filters so that I can use -i any. The reason for this is, that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compress the data, give it a reasonable filename and move it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now, I do not do any housekeeping, but I plan on monitoring disk and when I have 100G left I will start wiping the oldest files - this should be fine. And now; my problem I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files: 430083369 packets captured 430115470 packets received by filter 32057 packets dropped by kernel <-- This is my concern How can I avoid so many packets being dropped? These things I did already try or look at Changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default which did indeed help - actually it took care of just around half of the dropped packets. I have also looked at gulp - the problem with gulp is, that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. Next problem is, that when the traffic flows though a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
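
    Beyond the rmem sysctls already raised, tcpdump 4.x can be given a much larger libpcap capture buffer with -B (in KiB), and bumping the process priority helps it keep up during bursts. A sketch; the sizes are starting points to tune, not measured values:

      # Larger socket buffers for capture traffic
      sysctl -w net.core.rmem_max=268435456
      sysctl -w net.core.rmem_default=268435456

      # 512 MiB libpcap buffer (-B is in KiB), higher scheduling priority,
      # keeping the existing rotation/compression hook
      nice -n -10 tcpdump -n -B 524288 -C 1000 -z /data/compress.sh \
          -i any -w /data/livedump/capture.pcap $FILTER

    If drops persist, splitting into one tcpdump per physical interface (no filter needed, one CPU core each) is worth benchmarking against the single -i any process.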

    Read the article

  • turn off disable the performance cache

    - by jessie
    OK I run a streaming website and my CMS is giving me an error when uploading videos "Failed To Find Flength File" ok so I did some research. The answer I got from the coder was below. I did do all that, but the only thing I could not do is turn off what he refers to as performance cache, talked about in the last sentence... I am on a Cent OS Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts resides, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top-level of your webspace). htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R--R--). # Turn off mod_security filtering. SecFilterEngine Off # The below probably isn't needed, # but better safe than sorry. SecFilterScanPOST Off If the above method does not work, try putting the following lines into the file SetEnvIfNoCase Content-Type \ "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads" mod_gzip_on No 2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default, all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.

    Read the article

  • My VPS ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately, this did not fix the problem. By slow I mean requests for my PHP websites take a long time: very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggy. The websites are askmike.org (a pretty standard WordPress site) and mvr.me (my own PHP). How slow? Very slow: here is a picture of loading a clean install of WordPress. Slow: here is a picture of loading a small PHP-based website. The VPS has 256 MB RAM and a 25 GB HDD. Besides serving the 2 small websites it isn't doing anything, AFAIK. What I have installed: a clean Ubuntu Server 12.04, a LAMP stack, a few things like git and nodejs (not using either), ossec (because I thought my server was getting hammered), and munin. What I have already tried/done: I installed munin so that I could watch I/O speed and such. The problem is that I don't know what to look for in the munin report. I checked logs and don't see anything strange (although I don't really know what to look for besides strange/repetitive errors and GET requests). I configured Apache MPM to: <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 40 MaxRequestsPerChild 0 </IfModule> (Apache is using prefork, the default.) Stats: I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my MySQL crashed sometime after 1:00 (which is a new problem altogether), so the graph for last night might look strange. Can anyone help me get my VPS up to normal speed? EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host, and I'm also Dutch). I did two speed tests for disk I/O: $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down, but a 5-30 second load time for a normal PHP web page?
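
    To show the host that the box itself is starved rather than the sites being heavy, it helps to capture CPU steal time and I/O latency while a slow page loads; a sketch using tools already present on a stock Ubuntu 12.04 server:

      # Sample CPU once a second for a minute while requesting a page.
      # A high "st" (steal) column means the hypervisor is starving this VPS;
      # a high "wa" (iowait) column points at the host's disk subsystem.
      vmstat 1 60 > vmstat.log

      # Synchronous-write latency, to complement the dd throughput numbers
      dd if=/dev/zero of=testfile bs=4k count=1000 oflag=dsync 2> dsync.log

      # Time an actual request from the box itself, taking the network out
      time wget -qO /dev/null http://localhost/

    If steal time sits high while the pages crawl, that is concrete evidence for the provider; if it is near zero, the bottleneck is more likely Apache/MySQL tuning inside the 256 MB of RAM.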

    Read the article

  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive D:\fmunoz. Today, I created a directory in my desktop using the point-and-click method: Right-click on an empty space in the desktop Select New Select Folder Leave the default name New Folder and press Enter I tried to delete the folder using the point-and-click method: Right-click the New Folder directory Select Delete After five seconds, I got the following message: --------------------------- Error Deleting File or Folder --------------------------- Cannot delete New Folder: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use. --------------------------- Initially I thought that there must be some sort of indexing services locking the directory so I got a list of open files using the TuneUp Process Manager tool but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Destkop, tried to delete the New Folder directory using the same point-and-click method described above and got exactly the same message at the same amount of time. In the same window, I navigated to the actual location of the desktop directory D:\fmunoz\Desktop, tried to delete the New Folder directory and this time it worked. I thought that this behavior was due to some special treatment that Windows gives to the desktop or the profile directories so I tried doing the same thing with a different set of directories: Created a folder D:\dummy Created a junction C:\dummy pointing to D:\dummy Created a New Folder directory in C:\dummy Tried to delete New Folder from C:\dummy. Didn't work. Tried to delete New Folder from D:\dummy. It worked. I tried creating the folder in the actual directory rather than the junction directory: Created a New Folder directory in D:\dummy Tried to delete New Folder from C:\dummy. Didn't work. Tried to delete New Folder from D:\dummy. It worked. I also tried using the Delete button instead of using the Delete option of the context menu but it didn't work. When using the Shift+Delete sequence, it works. It also works by using the rd command in the console, but in both cases the deleted directory doesn't goes to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete button.

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
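
    For question (c), the OS-level limits that most often cap incoming connections are the per-process file descriptor limit and the listen/accept backlogs. A sketch of common starting values (not benchmarked numbers) to test against:

      # More open sockets per worker (or set worker_rlimit_nofile in nginx.conf)
      ulimit -n 65536

      # Deeper accept and SYN queues so bursts aren't dropped before nginx sees them
      sysctl -w net.core.somaxconn=4096
      sysctl -w net.core.netdev_max_backlog=4096
      sysctl -w net.ipv4.tcp_max_syn_backlog=8192

      # Recycle TIME_WAIT sockets faster during sustained load tests
      sysctl -w net.ipv4.tcp_fin_timeout=15

    The gradual fall from 600 to 250 r/s on long runs is also consistent with TIME_WAIT or ephemeral-port exhaustion on the benchmarking client, so it is worth watching "ss -s" on both ends during a long test.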

    Read the article

  • vmdk to live cd - VMware vmxnet virtual NIC driver Kernel panic

    - by ronalchn
    Task I am trying to convert a virtual machine to a live CD. Specifically, the virtual machine I am trying to convert is the IOI 2013 Competition Environment. In this task, I am aided by a guide Converting a virtual disk image: VDI or VMDK to an ISO you can distribute. Symptoms However, after getting through all the instructions, the live CD causes a kernel panic on boot on bare metal. In particular, the screen shows: [0.737348] cdrom: Uniform CD-ROM driver Revision: 3.20 [0.737503] sr 3:0:0:0: >Attached scsi CD-ROM sr0 [0.737638] sr 3:0:0:0: >Attached scsi generic sg2 type 5 [0.737771] Freeing unused kernel memory: 756k freed [0.738093] Write protecting the kernel text: 5960k [0.738155] Write protecting the kernel read-only data: 2424k [0.738224] NX-protecting the kernel data: 4280k Loading, please wait... [0.752252] udevd[100]: starting version 175 [0.768708] VMware vmxnet3 virtual NIC driver - version 1.1.29.0-k-NAPI [0.781204] VMware PVSCSI driver - version 1.0.2.0-k [0.789555] VMware vmxnet virtual NIC driver [0.799356] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200 [0.799356] [0.799472] Pid: 1, comm: init Tainted: G 0 3.5.0-17-generic #28-Ubuntu [0.799549] Call Trace: [0.799603] [<c15bf0ec>] panic+0x81/0x17b [0.799654] [<c104a6a5>] do_exit+0x745/0x7a0 [0.799707] [<c104a9a4>] do_group_exit+0x34/0xa0 [0.799760] [<c104aa28>] sys_exit_group+0x18/0x20 [0.799813] [<c15cff5f>] sysenter_do_call+0x12/0x28 Possible problem I suspect that the problem is the VMware vmxnet virtual NIC driver - however, I do not know how I can uninstall it, and possibly install one for a bare metal machine. If anyone knows which packages needs installing/uninstalling at the .rootfs/ chroot directory stage, please let me know. Details on procedure Do note that after importing the .ova file into Virtualbox, the virtual machine is stored as a .vmdk file already, and not a .vdi file. I would like to point out some results of the procedure followed in case of any questions. This is after extracting the filesystem from the .raw file to the .rootfs/ directory mentioned in the blog. I changed the filesystem table as mentioned in the blog, then looked at the possible "kernel optimized for virtualization". However, I found that linux-image-generic was already installed. Also, when running the command dpkg-query --showformat='${Package}\n' -W 'vmware-tools*' (or dpkg-query --showformat='${Package}\n' -W '*-virtual'), no packages were found. Thus, I did not find any virtualization specific packages. I proceeded to generate the iso following the steps in the blog, and burned it to a DVD.
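
    One thing worth trying at the .rootfs/ chroot stage is stripping the VMware guest packages and blacklisting the VMware-only modules before regenerating the initramfs; package names vary between releases, so treat these as candidates to check with dpkg -l rather than exact instructions:

      # Inside the chroot of the extracted filesystem
      chroot .rootfs /bin/bash
      dpkg -l | grep -i -e vmware -e open-vm        # see what is actually installed
      apt-get purge -y open-vm-tools                # remove guest tooling if present

      # Keep the virtual-only drivers from loading on real hardware
      for m in vmxnet vmxnet3 vmw_pvscsi vmw_balloon; do
          echo "blacklist $m" >> /etc/modprobe.d/blacklist-vmware.conf
      done

      update-initramfs -u -k all                    # rebuild what the live CD boots
      exit

    That said, the trace itself shows init exiting (exitcode=0x00000200) right after the driver messages, so if the panic survives the driver cleanup, the casper/initramfs scripts on the ISO are the next place to look.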

    Read the article

  • Concerning persistence size in the Linux Live Creator

    - by user63085
    Hello everyone! For the last several months I have used the Linux Live USB Creator, which is a very useful app for putting portable OSes onto flash drives. I mostly use this application to test and try out new OSes as they are released, before I decide to do a hard disk installation on the computer. In many cases, the application developers will allow the "persistence" feature in the flash-drive-installed OS, which is just another way of saying that after multiple boot-ups and shutdowns, all the changes made to the OS will be saved on the flash drive. But I have a question about the limit of the persistence size in Linux Live USB Creator (currently version 2.6). I installed Super OS 10 onto a partition on my external drive, which has 30 GB. I wanted to reserve 10 GB for persistence so that I can install more applications and space will not run out as I update the installed applications or do system updates. But why is it that only 3950 MB can be set for persistence? It would be great if, when desired, much more persistence space could be set aside so that it doesn't run out so soon. Also, as I have installed the OS on a 30 GB drive, I tried to see how much space is left, but it seems only the remaining persistence space is displayed when I click on the File System folder. For example, after I have just installed it now, there is 3.5 GB of free space. Where is the remaining 26 GB or so of space on the same drive, and how do I access it? It would be helpful if anyone could explain and help me with this. Most importantly, it would be a big relief if the persistence could somehow be expanded by a workaround so that I can continue using my Super OS 10.04 (now heavily customized) OS, which unfortunately has just over 576 MB of space left now, after I removed OpenOffice.org and installed LibreOffice earlier today. This is what remains from the maximum allowable 3950 MB of space for persistence at setup. Thanks in advance!
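
    The ~3950 MB ceiling is the FAT32 4 GB single-file limit: the creator stores persistence as one casper-rw file on the FAT32-formatted drive. A common workaround, assuming Super OS keeps Ubuntu's standard casper persistence scripts and that the 30 GB drive can be repartitioned, is to keep a small FAT32 partition for the live system and give persistence its own ext3 partition labelled casper-rw:

      # Example layout: /dev/sdb1 = FAT32 live system (~4 GB),
      #                 /dev/sdb2 = rest of the drive for persistence.
      # Double-check the device names first; this erases /dev/sdb2!
      mkfs.ext3 -L casper-rw /dev/sdb2

      # Alternative: on a non-FAT32 data partition, a persistence file
      # larger than 4 GB can be created by hand (10 GB here)
      dd if=/dev/zero of=/media/external/casper-rw bs=1M count=10240
      mkfs.ext3 -F /media/external/casper-rw

    The live system still needs to boot with the "persistent" kernel option for casper to pick either of these up. The leftover ~26 GB shows up as a separate partition rather than as free space inside the persistence overlay, which is why the file manager only reports what remains of the 4 GB overlay.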

    Read the article

  • Possible HDD malfunction. Need help in diagnosing

    - by Protheus
    Today, while using my PC as I have for almost 4 years, I experienced the following: while opening a new tab in the Opera browser, the screen froze. Music (AIMP 3) continued to play for about 5 minutes and then stopped too. I tried Ctrl+Alt+Del, but the Win7 lock screen didn't appear. Caps/Scroll/Num Lock didn't toggle the LEDs on the keyboard. I rebooted my PC and the BIOS suggested I enter its settings or load defaults. I chose defaults. It didn't see a proper boot device (the old faithful "insert proper boot device" message, or something like it). After a second reboot it said that there is no ExpressGate installed (which I turned off in the BIOS years ago). I went into the BIOS settings to turn off ExpressGate and check the configuration: the time was not reset, all hard drives were present, and temperature and O.C. settings were nominal (no O.C.). I inserted my Win7 install disk to try recovery. It took awfully long to load (a few minutes) and didn't see the current installation. The PC has been running 24/7 for almost all these years. Hardware configuration: ASUS P5Q WS, Core 2 Quad Q9300 (2.5 GHz, no O.C.), MSI GeForce GTX 460, 4x2 GB GeIL EVO 2 (AFAIR), Seagate something 750 GB (4 years as system HDD, 24/7), WD 1 TB (for random stuff, 5 y.o.), Hitachi 500 GB (for even more random stuff, 6 y.o.), NEC DVD-RW (all disks are SATA), Cooler Master Silent Pro 700W. Software: Windows 7 AND Kubuntu on the same drive with the GRUB loader. Sorry, I can't remember the HDD models and can't check them right now, but I don't think they're relevant anyway. My idea is that due to some system error or hard drive glitch I've wrecked my primary HDD's MBR. Nevertheless, I don't exclude the possibility of another failure. Might it be the motherboard or its SATA controller? I doubt it, because all drives are seen in the BIOS and I could boot from DVD. Maybe GRUB got corrupted somehow, although I don't see how that's possible from Windows. But I did install Kubuntu from Windows (I wasn't myself then); maybe GRUB wrote itself into some Windows partition and got rewritten in the process? Right now I am at work with my flash drive with me and I need some advice on how to fix the MBR, or to hear that it's not the MBR. I'm going to buy a new HDD (Hitachi 7K2000) because I think my current HDD is compromised and it's unsafe to use as a system drive, especially 24/7.
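
    If the drives really are healthy and only the boot code is damaged, GRUB can usually be reinstalled from any Ubuntu-family live USB. A sketch, assuming the old Linux root partition is /dev/sda1 and the boot disk is /dev/sda (substitute whatever "sudo fdisk -l" shows), with a SMART check first to rule the disk in or out:

      sudo fdisk -l                          # identify disks and the old root partition
      sudo smartctl -a /dev/sda | less       # needs the smartmontools package

      sudo mount /dev/sda1 /mnt
      sudo mount --bind /dev  /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys  /mnt/sys
      sudo chroot /mnt grub-install /dev/sda # rewrite the MBR boot code
      sudo chroot /mnt update-grub           # regenerate the menu, Windows entry included

    If SMART shows reallocated or pending sectors climbing, replacing the drive first (as planned) and restoring onto it is the safer order of operations.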

    Read the article

  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS using VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to be able to run Ubuntu in full screen mode at 1600 x 900. I only have options within the virtual machine to select 4:3 display settings such as 1600 x 1200, 1440 x 1050 etc. I have guest additions installed. At the windows command prompt, I tried typing: VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16" This didn't work, still no 1600 x 900 res available in Ubuntu. I tried this having read the following section of the VirtualBox help (this also says something about a 'video mode hint feature' not sure what this means): 9.7. Advanced display configuration 9.7.1. Custom VESA resolutions Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you to add up to 16 custom video modes which will be reported to the guest operating system. When using Windows guests with the VirtualBox Guest Additions, a custom graphics driver will be used instead of the fallback VESA solution so this information does not apply. Additional video modes can be configured for each VM using the extra data facility. The extra data key is called CustomVideoMode with x being a number from 1 to 16. Please note that modes will be read from 1 until either the following number is not defined or 16 is reached. The following example adds a video mode that corresponds to the native display resolution of many notebook computers: VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16" The VESA mode IDs for custom video modes start at 0x160. In order to use the above defined custom video mode, the following command line has be supplied to Linux: vga = 0x200 | 0x160 vga = 864 For guest operating systems with VirtualBox Guest Additions, a custom video mode can be set using the video mode hint feature. UPDATE 02.06.12 I've just tried creating a new virtual machine using the same original disk image I had been given. This had Guest Additions v 4.1.6 installed and provided me with the 1600 x 900 full screen display I want. It's after I then install Guest Additions v 4.1.16 (the version included with my VirtualBox installation) that my only choices are 4:3 displays e.g. 1600 x 1200. Seems this is the cause.
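
    As a stop-gap while the Guest Additions version is sorted out, the guest can be told about the 16:9 mode directly with xrandr from inside the Ubuntu VM; the output name (VBOX0 here) and the modeline are assumptions, so check yours with "xrandr -q" and "cvt 1600 900":

      cvt 1600 900      # prints the modeline used below
      xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
      xrandr --addmode VBOX0 1600x900_60.00
      xrandr --output VBOX0 --mode 1600x900_60.00

    This only lasts for the session, which fits the finding above that the underlying fix is matching the Guest Additions version to the one that offered 1600 x 900 in the first place.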

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site, with many modules installed. The web server (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development. Obviously, on my laptop, the web server and database server are on the same box. Here are the results of my database query times: Production: Executed 291 queries in 320.33 milliseconds. (homepage) Executed 517 queries in 999.81 milliseconds. (content page) Development: Executed 316 queries in 46.28 milliseconds. (homepage) Executed 586 queries in 79.09 milliseconds. (content page) As can clearly be seen from these results, the time involved in querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same box as the web server. Why is this?! One factor must be the network latency. On average, a round trip from the web server to the database server takes 0.16 ms (shown by ping). That must be added to every single MySQL query. So, taking the content page example above, where 517 queries are executed, network latency alone will add roughly 83 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs 999 ms on the production boxes). What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and the current tmp_table_size to 1 GB, with no change (I think this is because I have some BLOB and TEXT columns).
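
    To confirm how much of the gap is per-query round trips, the arithmetic plus a direct measurement helps: 517 queries x 0.16 ms is roughly 83 ms of pure latency, so most of the remaining ~900 ms is spent elsewhere. A sketch using mysqlslap (ships with MySQL); the hostname, user and schema name are placeholders, and mysqlslap creates and drops the named schema:

      # Round trip alone, web server -> DB server
      ping -c 20 db.internal | tail -1

      # 1000 trivial queries over the network; divide the total time by 1000
      mysqlslap --host=db.internal --user=drupal --password \
          --create-schema=latency_test --query="SELECT 1;" \
          --number-of-queries=1000 --concurrency=1

      # Same test run locally on the DB box for comparison
      mysqlslap --user=drupal --password --create-schema=latency_test \
          --query="SELECT 1;" --number-of-queries=1000 --concurrency=1

    If the per-query gap between the two runs is much larger than the ping time, look at reverse DNS lookups on connect (skip-name-resolve), connection setup per request, and query cache behaviour rather than at the NIC.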

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 7200 disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the Web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried: Had a Small Instance, now running a c1.medium (High Cpu Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement. Changed Page File from fixed 256 to Automatic. Setup a Striped Mirror from within Disk Management with two attached 1 GB EBS volumes. Moved database and transaction log, left everything else on the boot EBS volume. No noticeable change. Looked at memory, ~1000 MB of physical memory free (1.7 GB total). Changed SQL instance to use a minimum of 1024 RAM; restarted server, no change in memory usage. SQL still only using ~28MB of RAM(!). So I'm thinking: this database is tiny (28MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about Recovery Models and log sizes somewhere in my travels, but not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, but that had no effect. Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?

    Read the article

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long. Good little scientist that I am, I tried to reduce the problem down to one variable. The test As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router, and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download." My question I'd still prefer to use the router for a few reasons: it's a pain to connect and disconnect everything every time there's a big file to download direct connection to the modem isn't good for security only one machine can be connected directly to the modem at a time What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables. EDIT : After asking this followup question and then this one, I installed dd-wrt on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer — which is why I'm not posting it as an answer — but it is how I resolved the situation, and hopefully it'll be helpful for someone.

    Read the article

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers but occasionally we do small-office sys admin. Thus, 99% of time we work on GNU/Linux (mainly Ubuntu) but from time to time we need to setup a Windows environment to assist some customers (e.g. setup a temporary SQL Server 2008). Our requirements: Low budget: we don't want the cheapest solution out there but we can't afford to spend too much. Budget could be ~1000-1500€ (before VAT) Robustness: we would like to setup a RAID array and maybe have an external disk where we can store backups Virtualization: we need to be able to setup few servers for development. The scenario is something like this (~8 appliances running in parallel): Redmine + GIT server Bacula server FTP server 3-4 virtual appliances that could be set up on demand to test our applications or support a customer. The appliances could be: LAMP, Tomcat+PostgreSQL, SQL Server Support: if something breaks down it shouldn't be too difficult to find a replacement. Now, given the main requirements, there are some doubts we need to clarify: Do you suggest to buy a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or to assemble the server by ourselves (buy the separate components)? What RAID configuration do you suggest? I was thinking of RAID1 (probably cheaper) or RAID5. should we buy a hardware RAID controller or is it ok to use a software RAID (mdadm)? In case, which controller do you suggest? What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)? How much RAM? (I was thinking at least 8GB, ~1GB per appliance) What virtualization software do you recommend? VMWare seems to be the best choice, but what about XEN or KVM? We don't want to buy licenses at the moment so we would like to consider only free options. What OS do you recommend? We know Ubuntu, Debian, Gentoo very well (we would like to use Ubuntu Server), however it seems a lot of people goes for CentOS. Thanks in advance if you can help us with this! It's our first "serious" server so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
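
    On the software-RAID question: Linux md RAID1 via mdadm is well supported on Ubuntu Server and needs no special controller; the Ubuntu installer can build the system array, and extra data disks can be mirrored later. A sketch, assuming two blank disks /dev/sdb and /dev/sdc (this destroys their contents):

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext4 /dev/md0
      mkdir -p /srv/data
      mount /dev/md0 /srv/data
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist across reboots
      cat /proc/mdstat                                 # check sync status

    A battery-backed hardware controller mainly buys write-cache performance and hot-swap convenience; for a budget in this range, mdadm RAID1 plus good backups is the usual trade-off.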

    Read the article

  • ASP .NET 2.0 C# AjaxPro RegisterTypeForAjax

    - by Dan7el
    I am wondering if RegisterTypeForAjax isn't working correctly. I am getting the error noted at the end of the code block below. Sample is from here: http://www.ajaxtutorials.com/asp-net-ajax-quickstart/tutorial-introduction-to-ajax-in-asp-net-2-0-and-c/ Any ideas as to why I'm getting this error? Thanks. ASP .NET 2.0 C# Here is the code-behind: using System; using System.Data; using System.Data.SqlClient; using System.Configuration; using System.Collections; using System.Collections.Generic; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using AjaxPro; namespace WebApplication1 { public partial class Ajax_CSharp : System.Web.UI.Page { protected override void OnInit( EventArgs e ) { base.OnInit( e ); Load += new EventHandler( Page_Load ); } protected void Page_Load( object sender, EventArgs e ) { Utility.RegisterTypeForAjax( typeof( Ajax_CSharp ) ); } [ AjaxMethod( HttpSessionStateRequirement.ReadWrite ) ] public string GetData() { // method gets a row from the db and returns a string. } } Here is the ASPX page: <%@ Page Language="C#" AutoEventWireup="false" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Ajax_CSharp" % Untitled Page function GetData() { var response; Ajax_CSharp.GetData( GetData_CallBack ); } function GetData_CallBack( response ) { var response = response.value; if ( response == "Empty" ) { alert( "No Record Found." ); } else if ( response == "Error" ) { alert( "An Error Occurred in Accessing the Database !!!" ); } else { var arr = response.split( "~" ); var empID = arr[0].split( "," ); var empName = arr[1].split( "," ); document.getElementById( 'dlistEmployee' ).length = 0; for ( var i = 0; i < empID.Length; i++ ) { var o = document.createElement( "option" ); o.value = empID[i]; o.text = empName[i]; document.getElementById( 'dlistEmployee' ).add( o ); } } } function dodisplay() { var selIndex = document.getElementById( "dlistEmployee" ).selectedIndex; var empName = document.getElementById( "dlistEmployee" ).options( selIndex ).text; var empID = document.getElementById( "dlistEmployee" ).options( selIndex ).value; document.getElementById( "lblResult" ).innerHTML = "You have selected " + empName + " (ID: " + empID + " )"; } </script>    Run it and click on the button and I get this error: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; MS-RTC LM 8) Timestamp: Mon, 26 Apr 2010 17:22:44 UTC Message: 'Ajax_CSharp' is undefined Line: 13 Char: 11 Code: 0 URI: http://localhost:4678/Default.aspx

    Read the article

  • QtOpenCl make errors. Please help.

    - by Skkard
    So I downloaded the ATI Stream SDK. I don't have a GPU right now, so I use '-device cpu', and I got the programs/examples in the OpenCL directory working by adding the SDK's library directory to LD_LIBRARY_PATH, etc. The problem comes when installing QtOpenCL. The configure script gives me:

      skkard@skkard-desktop:~/Applications/qt-labs-opencl$ ./configure
      This is the QtOpenCL configuration utility.
      Qt version ............. 4.6.2
      qmake .................. /usr/bin/qmake
      OpenCL ................. yes
      OpenCL/OpenGL interop .. yes
      Extra QMAKE_CXXFLAGS ...
      Extra INCLUDEPATH ......
      Extra LIBS ............. -lOpenCL
      QtOpenCL has been configured. Run '/usr/bin/make' to build.

    Make gives me:

      skkard@skkard-desktop:~/Applications/qt-labs-opencl$ make
      cd src/ && make -f Makefile
      make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src'
      cd opencl/ && make -f Makefile
      make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/opencl'
      make[2]: Nothing to be done for `first'.
      make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/opencl'
      cd openclgl/ && make -f Makefile
      make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl'
      make[2]: Nothing to be done for `first'.
      make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl'
      make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src'
      cd examples/ && make -f Makefile
      make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples'
      cd opencl/ && make -f Makefile
      make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl'
      cd vectoradd/ && make -f Makefile
      make[3]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd'
      g++ -o vectoradd vectoradd.o qrc_vectoradd.o -L/usr/lib -L../../../lib -L../../../bin -lQtOpenCL -lQtGui -lQtCore -lpthread
      ../../../lib/libQtOpenCL.so: undefined reference to `clBuildProgram'
      ../../../lib/libQtOpenCL.so: undefined reference to `clSetCommandQueueProperty'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueNDRangeKernel'
      ../../../lib/libQtOpenCL.so: undefined reference to `clSetKernelArg'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBufferToImage'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseMemObject'
      ../../../lib/libQtOpenCL.so: undefined reference to `clFinish'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueUnmapMemObject'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetMemObjectInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadImage'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMarker'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainCommandQueue'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetCommandQueueInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImage'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseContext'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainMemObject'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseEvent'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapImage'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clUnloadCompiler'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueBarrier'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramBuildInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWaitForEvents'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainProgram'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateContext'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage3D'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceIDs'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetContextInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseCommandQueue'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetSamplerInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformIDs'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetSupportedImageFormats'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clWaitForEvents'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventProfilingInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetImageInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithBinary'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseSampler'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateCommandQueue'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelWorkGroupInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainEvent'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainContext'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateSampler'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseProgram'
      ../../../lib/libQtOpenCL.so: undefined reference to `clFlush'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramInfo'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernel'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainKernel'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteImage'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernelsInProgram'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithSource'
      ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseKernel'
      ../../../lib/libQtOpenCL.so: undefined reference to `clRetainSampler'
      ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage2D'
      ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImageToBuffer'
      ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelInfo'
      collect2: ld returned 1 exit status
      make[3]: *** [vectoradd] Error 1
      make[3]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd'
      make[2]: *** [sub-vectoradd-make_default] Error 2
      make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl'
      make[1]: *** [sub-opencl-make_default] Error 2
      make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples'
      make: *** [sub-examples-make_default-ordered] Error 2

    I also tried configuring with '-no-openclgl', but then none of the examples etc. get compiled. I'm on Ubuntu 10.04, using the Qt that is installed from Synaptic.
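    Every one of those missing symbols is an entry point of the OpenCL runtime library itself, and the failing g++ line above links -lQtOpenCL, -lQtGui, -lQtCore and -lpthread but never -lOpenCL, so one plausible reading is that the example link step is simply never told where (or that) the Stream SDK's libOpenCL.so is. A quick way to check that plain OpenCL links at all on this machine, independent of Qt, is a minimal C++ program like the sketch below; $ATISTREAMSDKROOT and the lib/x86_64 path are assumptions about where the SDK was unpacked, so adjust them to your install:

      // cltest.cpp -- minimal OpenCL linkage check (a sketch; not part of QtOpenCL).
      // Assumes the ATI Stream SDK headers are reachable via -I$ATISTREAMSDKROOT/include.
      #include <CL/cl.h>
      #include <cstdio>

      int main()
      {
          // Ask the OpenCL runtime how many platforms it can see.
          cl_uint numPlatforms = 0;
          cl_int err = clGetPlatformIDs(0, NULL, &numPlatforms);
          if (err != CL_SUCCESS || numPlatforms == 0) {
              std::printf("clGetPlatformIDs failed (err=%d, platforms=%u)\n", err, numPlatforms);
              return 1;
          }

          // Print the name of the first platform as a sanity check.
          cl_platform_id platform;
          clGetPlatformIDs(1, &platform, NULL);
          char name[256] = {0};
          clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
          std::printf("Found %u platform(s); first one: %s\n", numPlatforms, name);
          return 0;
      }

    Something along the lines of "g++ cltest.cpp -I$ATISTREAMSDKROOT/include -L$ATISTREAMSDKROOT/lib/x86_64 -lOpenCL -o cltest" (paths assumed) should build and run it. If that works but the QtOpenCL examples still fail, appending the same -L path plus -lOpenCL to the failing vectoradd link line, or passing them to QtOpenCL's configure as its extra INCLUDEPATH/LIBS, would be the first thing to try.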

    Read the article

  • HOWTO: Migrate SharePoint site from one farm to another?

    - by Ramiz Uddin
    Hi everyone. I have a site deployed on a development machine. The site was developed under WSS 3.0 and contains custom lists, features, templates, styles, etc. What I have to do is create a deployment package (setup) which I can hand to my client. I know about stsadm, but I don't have access to the production machine. Is there a way I can package all the dependencies (including site content) into a single installation file and run it on the server?

    I've tried to experiment with the SharePoint Content Deployment Wizard. Exporting the site went fine, but the import always fails with the following message:

      [2/2/2010 3:43:25 PM]: Start Time: 2/2/2010 3:43:25 PM.
      [2/2/2010 3:43:25 PM]: Progress: Initializing Import.
      [2/2/2010 3:43:42 PM]: FatalError: Could not find WebTemplate #75805 with LCID 1033.
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.VerifyWebTemplate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.Validate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.DeserializeAndValidate()
        at Microsoft.SharePoint.Deployment.SPImport.VerifyRequirements()
        at Microsoft.SharePoint.Deployment.SPImport.Run()
      [2/2/2010 3:43:48 PM]: Progress: Import Completed.
      [2/2/2010 3:43:48 PM]: Finish Time: 2/2/2010 3:43:48 PM.
      [2/2/2010 3:43:48 PM]: Completed with 0 warnings.
      [2/2/2010 3:43:48 PM]: Completed with 1 errors.
      [2/2/2010 3:44:51 PM]: Start Time: 2/2/2010 3:44:51 PM.
      [2/2/2010 3:44:51 PM]: Progress: Initializing Import.
      [2/2/2010 3:45:08 PM]: FatalError: Could not find WebTemplate #75805 with LCID 1033.
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.VerifyWebTemplate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.Validate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.DeserializeAndValidate()
        at Microsoft.SharePoint.Deployment.SPImport.VerifyRequirements()
        at Microsoft.SharePoint.Deployment.SPImport.Run()
      [2/2/2010 3:45:14 PM]: Progress: Import Completed.
      [2/2/2010 3:45:14 PM]: Finish Time: 2/2/2010 3:45:14 PM.
      [2/2/2010 3:45:14 PM]: Completed with 0 warnings.
      [2/2/2010 3:45:14 PM]: Completed with 1 errors.
      [2/2/2010 3:56:17 PM]: Start Time: 2/2/2010 3:56:17 PM.
      [2/2/2010 3:56:17 PM]: Progress: Initializing Import.
      [2/2/2010 3:56:34 PM]: FatalError: Could not find WebTemplate #75805 with LCID 1033.
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.VerifyWebTemplate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.Validate(SPRequirementObject reqObj)
        at Microsoft.SharePoint.Deployment.ImportRequirementsManager.DeserializeAndValidate()
        at Microsoft.SharePoint.Deployment.SPImport.VerifyRequirements()
        at Microsoft.SharePoint.Deployment.SPImport.Run()
      [2/2/2010 3:56:39 PM]: Progress: Import Completed.
      [2/2/2010 3:56:39 PM]: Finish Time: 2/2/2010 3:56:39 PM.
      [2/2/2010 3:56:39 PM]: Completed with 0 warnings.
      [2/2/2010 3:56:39 PM]: Completed with 1 errors.

    I couldn't find a good reference on how to use the wizard, but in any case it isn't quite what I'm looking for: I need something that can create a simple deployment package which, once run, needs no further steps. I may be wrong, but after two days of googling I think there is no freeware utility that can package up a site and install it on another farm without any configuration before you run the installation package.

    You might have advice that helps me think outside the box and get to a solution quickly instead of spending more days on the problem. Please share only freeware; I can't afford to buy anything. I'm waiting to be surprised with a good share :) Have a good day! Thanks.
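    For what it's worth, "Could not find WebTemplate #75805 with LCID 1033" is the import complaining that the custom site definition/template the export was built on does not exist on the target farm, so a content import alone cannot succeed: the customizations (features, templates, etc.) have to be installed there first, typically packaged as a .wsp solution, and only then can the content come across. A rough sketch of that order of operations using the out-of-the-box stsadm tool (the URLs and file names are placeholders, and the exact deploysolution flags depend on what the solution contains):

      REM On the source (development) farm: export the site content.
      stsadm -o export -url http://devserver/sites/mysite -filename mysite.cmp

      REM On the target farm: install the packaged customizations first.
      REM (mycustomizations.wsp is a placeholder for a solution package you still have to build.)
      stsadm -o addsolution -filename mycustomizations.wsp
      stsadm -o deploysolution -name mycustomizations.wsp -immediate -allowgacdeployment
      stsadm -o execadmsvcjobs

      REM Only once the template and features exist on the target, import the content.
      stsadm -o import -url http://prodserver/sites/mysite -filename mysite.cmp

    This still leaves the gap the question is really about, building the .wsp in the first place, so treat the above purely as the sequence the target-farm admin would need to run, not as a finished one-click installer.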

    Read the article

  • How to create an array of User Objects in Powerbuilder?

    - by TomatoSandwich
    The application has many different windows. One is a single 'row' window, which relates to a single row of data in a table, say 'Order'. Another is a 'multiple row' datawindow, where each row in the datawindow relates to a row in 'Order'; it is used for spreadsheet-like data entry.

    Functionality extensions have created a detail table, say 'Suppliers', for when an order may require multiple suppliers to fill it. Normally no suppliers are required, because the items are already in the warehouse (0); sometimes an order to a supplier is needed to complete the order (1); and sometimes multiple suppliers have to be contacted (more than one).

    As a single order is entered, once the items are entered, a user object is populated depending on the status of the items in the warehouse. If required, this creates a 1-to-many relationship between the order and the "backorder". On the PB side there is a single object, uo_backorder, which is created on the window and referenced by the window depending on the command (button popup, save, etc.).

    I have been tasked with creating the 'backorder' functionality on the spreadsheet-like (multiple-row) window. Previously, the default options for backorders were used when orders were created from that window. A workaround already exists where unconfirmed orders can be opened in the single-row window and the backorder information manipulated there. However, the userbase wants this functionality on the one window.

    Since the functionality of uo_backorder already exists, I assumed I could just copy the code from the single-order window but create an array of uo_backorder objects to cope with multiple rows. I tried the following:

      forward
      ..
      type uo_backorder from popupdwpb within w_order_conv
      end type
      end forward

      global type w_order_conv from singleform
      ..
      uo_backorder uo_backorder
      end type

      type variables
      ..
      uo_backorder iuo_backorders[]
      end variables
      ..
      public function boolean iuo_backorders();
      ..
      long ll_count
      ll_count = UpperBound(iuo_backorders[])
      iuo_backorders[ll_count+1] = uo_backorder  //THIS ISN'T RIGHT
      lb_ok = iuo_backorders[ll_count+1].init('w_backorder_popup', '', '', '', 'd_backorder_popup', sqlca, useTransObj())
      return lb_ok
      end function
      ..
      <utility functions>
      ..
      type uo_backorder from popupdwpb within w_order_conv
      integer x = 28
      integer y = 28
      integer width ...
      end type
      on uo_backorder.destroy
      call popupdwpb::destroy
      end on

    The issue is that the line commented "THIS ISN'T RIGHT" isn't correct: it associates the visual object placed on the face of the main window with each array cell, so any time I reference an array cell I'm actually referencing the one original object, not the new instances I (thought I) was creating. If I change that line to

      iuo_backorders[ll_count+1] = create uo_backorder

    the code doesn't run, saying that it failed to initialize the popup window; I think this is related to the class having the same name as the instance.

    What I want to end up with is an array of uo_backorder objects that I can associate with each row of my datawindow (first row = first cell, etc.). I think the issue lies in the fact that it's a visual object, and I can't seem to get the window to run without adding a dummy object on the face of the window (functionality carried over from the original single-row window). Since it's a VISUAL object, does it indeed need to be embedded on the window face for the window to know what object I'm talking about? If so, how does one create multiple window-face objects (one to many, depending on when a row is added)?

    Don't hesitate to ask for any more information this issue may require. I have no idea what is 'standard' or 'default' in PB and what is custom and needs more explaining.
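    One direction worth sketching (PowerScript, untested, and it assumes the window-face control is renamed, e.g. to uo_backorder_template, so the control name no longer shadows the class name): visual user objects are normally instantiated onto a window with OpenUserObject() rather than a plain create, and the references it hands back can be kept in the instance array:

      // Inside a window function of w_order_conv, run whenever a new row needs its own backorder object.
      uo_backorder luo_new
      long ll_count
      boolean lb_ok

      // OpenUserObject() creates the visual user object and places it on this window.
      this.OpenUserObject(luo_new, 0, 0)
      luo_new.Visible = FALSE          // keep it off-screen; it is only popped up on demand

      ll_count = UpperBound(iuo_backorders)
      iuo_backorders[ll_count + 1] = luo_new
      lb_ok = iuo_backorders[ll_count + 1].init('w_backorder_popup', '', '', '', 'd_backorder_popup', sqlca, useTransObj())

    Whether that resolves the class-versus-instance naming error would need confirming against the real code, but it shows one way of getting a per-row instance without adding a dummy control to the window for each one.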

    Read the article
