Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • PASS: Total Registrations

    - by Bill Graziano
    At the Summit you’ll see PASS announce the total attendance and the “total registrations”. Total registrations is the sum of conference attendees and pre-conference registrations, so a single person can be counted three times (conference plus two pre-cons) in that count. When I was doing marketing for the Summit this drove me nuts; I couldn’t figure out why anyone would use total registrations. However, when I tried to stop reporting this number I got lots of pushback. Apparently this is how conferences compare themselves to each other. Vendors, sponsors and Microsoft all wanted to know our total registration number. I was even asked why we weren’t doing more “things” that people could register for so that our number would be even larger. I understand that many of you are very detail-oriented. I just want to make sure you understand what numbers you’re seeing when we include them in the keynote at the Summit.

    Read the article

  • Looking at desktop virtualization, but some users need 3D support. Is HP Remote Graphics a viable solution?

    - by Ryan Thompson
    My company is looking at desktop virtualization and plans to move all of the desktop compute resources into the server room or data center, providing users with thin clients for access. In most cases a simple VNC or Remote Desktop solution is adequate, but some users run visualizations that require 3D capability--something that VNC and Remote Desktop cannot support. Rather than making an exception and providing desktop machines for these users, complicating our rollout and future operations, we are considering adding servers with GPUs and using HP's Remote Graphics to provide access from the thin client. The demo version appears to work acceptably, but there is a bit of a learning curve, it's not clear how well it would work for multiple simultaneous sessions, and it's not clear whether it would be a good solution for non-3D sessions. If possible, as with the hardware, we want to deploy a single software solution instead of a mishmash. If anyone has had experience managing a large installation of HP Remote Graphics, I would appreciate any feedback you can provide.

    Read the article

  • Lustre - is this bad form?

    - by ethrbunny
    I'm going to be consolidating several 'server rooms' into a single installation soon. Part of this effort will be finding a home for 5 TB (and growing) of files/logs. To this end I'm looking at Lustre and appreciating its ability to scale. The big vendors want to sell me a $20K SAN to manage this, but I'm wondering about buying several iSCSI units (like this http://www.asacomputers.com/3U-iSCSI-Solution.html) and using VMs for the OSS machines. This would let me fail over to cover problems and not require a dedicated system for each OSS. Given articles like this (http://h30565.www3.hp.com/t5/Feature-Articles/RAID-Is-Dead-Long-Live-RAID/ba-p/1422) that talk about how RAID is not keeping up with drive density, I'm leaning towards more disks with lower capacity each. Again, something akin to the iSCSI array above. Tell me why this is a terrible idea. Do I really need to invest in a PE710 for each OSS/OST?
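
    For reference, a minimal sketch of what one such Lustre layout might look like on the command line (the filesystem name, host names, and device paths below are illustrative assumptions, not from the question):

        # on the combined MGS/MDS node
        mkfs.lustre --fsname=scratch --mgs --mdt /dev/sdb
        mount -t lustre /dev/sdb /mnt/mdt

        # on each OSS (VM or not), one OST per backing device, e.g. an iSCSI LUN
        mkfs.lustre --fsname=scratch --ost --mgsnode=mds01@tcp0 /dev/sdc
        mount -t lustre /dev/sdc /mnt/ost0

        # on the clients
        mount -t lustre mds01@tcp0:/scratch /mnt/scratch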

    Read the article

  • options for deploying an application

    - by terence
    I've created a simple web application, a self-contained tool with a user system. I host it publicly for everyone to use, but I've gotten some requests to allow companies to host the entire application privately on their internal systems. I have no idea what I'm doing - I have no experience with deployment or server stuff. I'm just some person who learned enough JS and PHP to make a tool for my own needs. The application runs with Apache, MySQL, and PHP. What's the best way to package my application to let others run it privately? I'm assuming there are better options than just sending them all the source code. I'd like to find a solution that: does not require support to set up (I'm just a single developer without much free time), is easy to configure, and is easy to update. Does there exist some one-size-fits-all thing that I can give to someone, they can install it, and bam, now when they go to http://myapplication/ on their intranet, it works? Thanks for your help.
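
    A minimal sketch of the kind of self-service install script that could ship with the source (the file names, database name, and layout here are illustrative assumptions, not something the question specifies):

        #!/bin/sh
        # hypothetical install.sh bundled with the application source
        set -e

        # load the bundled schema into a fresh database
        mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS mytool"
        mysql -u root -p mytool < schema.sql

        # start from a sample config and let the admin fill in credentials
        cp config.sample.php config.php

        echo "Edit config.php with your MySQL credentials, then point an Apache vhost (or alias) at this directory."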

    Read the article

  • Get the interface and IP address used to connect to a specific host (IP)

    - by umop
    I'm sure this has been asked and answered before, but I wasn't able to find it, so hopefully this will at least link someone to the right place. I want to find out the local interface and IP address used to reach a certain host. For instance, if I had 3 adapters connected to my box and all three went to different networks, I'd like to know which of the three (specifically, its IP address) is used to reach my.local.intranet (in this case, it would be a VPN tunnel interface). I suspect this is a job for ifconfig or traceroute, but I haven't been able to find the correct switches. I'm running OS X 10.7 (Darwin). Thanks!
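
    A minimal sketch of one way to do this on OS X, assuming the BSD route utility (the hostname and interface name are illustrative):

        # ask the routing table which interface would be used to reach the host
        route get my.local.intranet | grep interface
        #   interface: utun0

        # then read the local address bound to that interface
        ifconfig utun0 | awk '/inet /{print $2}'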

    Read the article

  • Is it effective to install a firewall on the same machine that offers the service?

    - by Eonil
    I'm starting a small service, and I currently have a single server. There is no money to purchase separate, dedicated firewall equipment right now. Is it effective to install firewall software on the same machine that offers the internet service? My server will offer HTTP, NFS, and SSH, plus custom-made server software on several ports. (edit) All services except NFS should be open to the internet; they are not internal services. I guess my machine (virtualized within Xen) is connected to the internet directly, because I can connect to it over SSH with only an IP address. (edit) NFS is not open to the internet, sorry for my mistake. NFS will be served via SSH only.
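
    A minimal host-based firewall sketch with iptables, assuming a standard Linux netfilter setup (the ports and policy below are illustrative; adjust them to the actual services):

        # default-deny inbound, allow loopback and already-established traffic
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # open only the public services
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # HTTP
        # one rule per custom service port; NFS stays closed and is tunneled over SSH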

    Read the article

  • Preferred way for dealing with customer-defined data in enterprise application

    - by Axarydax
    Let's say we have a small enterprise web (intranet) application for managing data for car dealers. It has screens for managing customers, inventory, orders, warranties and workshops. This application is installed at 10 customer sites for different car dealers. The first version of this application was created without any way to provide for customer-specific data. For example, if dealer A wanted to be able to attach a photo to a customer, dealer B wanted to add an e-mail contact to each workshop, and dealer C wanted to attach multiple PDF reports to a warranty, each and every feature like this was added to the application, so all of the customers received everything on the next update. However, this will inevitably lead to conflicts as the number of customers grows, because their usage patterns are unique: if, for instance, a specific dealer requested the ability to attach (for some reason) a color to an inventory item (and to search by this color) as a required field, others really wouldn't need this feature and definitely would not want it to be required. Or one dealer would like to manage e-mail contacts for their employees on a separate screen of the application. I imagine a solution for this is a kind of plugin system, where we would have a core application that provides the standard features (customers, inventory, etc.) plus whatever plugins each customer has installed. There would be different kinds of plugins: standalone screens with their own logic, such as e-mail contacts for employees, and plugins that extend or decorate inventory items (like the photo or color). Inventory (customer, order, ...) plugins would need an installation procedure, hooks for plugging into the item editor, the item display, item filtering for search, a backup hook, and so on. Is this the right way to solve this problem?
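
    For illustration, a minimal C# sketch of what such a plugin contract could look like; every name below is a hypothetical example, since the question does not name the application's stack:

        using System.Collections.Generic;
        using System.Data;
        using System.IO;

        // Hypothetical contract for a plugin that decorates a core entity such as an inventory item.
        public interface IEntityExtension
        {
            string EntityName { get; }                        // e.g. "Inventory"
            void Install(IDbConnection db);                   // create extra columns/tables on first run
            void RenderEditor(IDictionary<string, object> entity, TextWriter output);   // hook into the item editor
            IEnumerable<string> SearchableFields { get; }     // extra fields the search screen can filter on
        }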

    Read the article

  • Avoid cache overflow in Atempo LiveBackup

    - by Vebjorn Ljosa
    When attempting the initial backup of a new client, Atempo LiveBackup seems to require a very large cache. For instance, a 20 GB cache is not enough to back up a computer that has 100 GB of data. It appears that LiveBackup is adding new files to the cache at a faster rate than it can send them to the server. When the cache fills up, the backup fails. Aside from removing most of the data from the computer and then adding it back gradually after the initial backup, is there a good workaround? Is it possible to make LiveBackup slow down its scan so as not to fill the cache? Or is it possible to place the cache on an external drive?

    Read the article

  • How to disable password authentication for specific users in sshd

    - by Nick
    I have read several posts about restricting ALL users to key authentication ONLY; however, I want to force only a single user (svn) onto key auth, while the rest can use either a key or a password. I read How to disable password authentication for every user except several; however, it seems the "Match User" part of sshd_config is part of openssh-5.1. I am running CentOS 5.6 and only have OpenSSH 4.3. I have the following repos available at the moment:

        $ yum repolist
        Loaded plugins: fastestmirror
        repo id    repo name                                               status
        base       CentOS-5 - Base                                         enabled:  3,535
        epel       Extra Packages for Enterprise Linux 5 - x86_64          enabled:  6,510
        extras     CentOS-5 - Extras                                       enabled:    299
        ius        IUS Community Packages for Enterprise Linux 5 - x86_64  enabled:    218
        rpmforge   RHEL 5 - RPMforge.net - dag                             enabled: 10,636
        updates    CentOS-5 - Updates                                      enabled:    720
        repolist: 21,918

    I mainly use epel; rpmforge is used for the latest version (1.6) of Subversion. Is there any way to achieve this with my current setup? I don't want to restrict the server to keys only, because if I lose my key I lose my server ;-)
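
    For reference, a minimal sketch of the two usual approaches (the first assumes an OpenSSH build new enough to support Match blocks, e.g. from a newer package; the second works on the stock 4.3 by locking the account password so only key logins succeed):

        # sshd_config, on an OpenSSH that supports Match
        PasswordAuthentication yes
        Match User svn
            PasswordAuthentication no

        # alternative on OpenSSH 4.3: lock the svn account's password
        usermod -L svn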

    Read the article

  • Should I format USB sticks and SD cards to FAT, FAT32, exFAT or NTFS? (Windows files, live Linux distros)

    - by superuser
    Does which one to choose depend on the media size or on some other parameters? In Windows 7, FAT16 is the default. In pendrivelinux.com's Universal USB Installer it is FAT32. Which one should I choose? How about NTFS for Windows use? How about exFAT? It is the Microsoft-designed filesystem for removable media. Is there a difference between USB sticks and SD cards in this regard? Edit: seeing developments in the other thread, should I still use something like exFAT if I don't want Recycle Bins created on every single machine I plug my USB thumb drive into?

    Read the article

  • startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and we expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace CloudServer or EC2 and firing up 3 nodes (all on CentOS):

        2 x Web (Apache) - with load balancer
        1 x MySQL (for the WordPress-powered part)

    The question is where to put Cassandra right now... Should it sit on each web node, or on the MySQL node? My thought right now is to put it on the web nodes. It's my understanding that Cassandra has the benefit of fault tolerance (i.e. if we take a node down, the site is still operational), so even with only 2 nodes we'd have that benefit, as opposed to just putting it on the MySQL node. Also, as we scale up and add another node, a Cassandra instance can come along with it, and PHP can always run its queries against localhost. Is this a good idea?
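
    A minimal sketch of the per-web-node idea in configuration terms, assuming a Cassandra release configured via cassandra.yaml (all names and addresses below are illustrative):

        # cassandra.yaml fragment on each web node
        cluster_name: 'mysite'
        listen_address: 10.0.0.11          # this node's private IP
        rpc_address: 127.0.0.1             # PHP connects to the local node only
        seed_provider:
            - class_name: org.apache.cassandra.locator.SimpleSeedProvider
              parameters:
                  - seeds: "10.0.0.11,10.0.0.12"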

    Read the article

  • Need help upgrading MacBookPro3,1 RAM to 4GB.

    - by Fantomas
    My questions are:

    1) Where to buy it and what to buy? I have heard that this RAM is generic enough and it does not have to come from Apple.
    2) Can I reuse my existing stick(s)? Would I have a single 2GB module, or 2 x 1GB modules?
    3) If I have 2GB already, is it a good idea to have one old stick and one new one? Which one is better placed at the top and which one at the bottom?

    Let me know what questions you have. My computer's info:

        Hardware Overview:
          Model Name: MacBook Pro
          Model Identifier: MacBookPro3,1
          Processor Name: Intel Core 2 Duo
          Processor Speed: 2.4 GHz
          Number Of Processors: 1
          Total Number Of Cores: 2
          L2 Cache: 4 MB
          Memory: 2 GB
          Bus Speed: 800 MHz
          Boot ROM Version: MBP31.0070.B07
          SMC Version (system): 1.16f11

    Read the article

  • excel - merge cells including a zip code

    - by evanmcd
    Hi all, I need to merge a bunch of cells that make up an address (street, city, state, zip) into a single cell. No problem, except with the zip code. The zip cell has only 4 digits for any zip that starts with 0, so I change its format to Special - Zip Code. That makes the cell itself show the leading 0, but the merged cell still does not show the leading 0. Does anyone know how to get the leading 0 in the merged column? Thanks, Evan
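
    A minimal sketch of the usual workaround, assuming the street, city, state and zip sit in A2:D2 (the cell references are illustrative): format the zip inside the concatenation with TEXT so the leading zero is carried into the merged string.

        =A2 & ", " & B2 & ", " & C2 & " " & TEXT(D2, "00000")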

    Read the article

  • Should I make a separate unit test for a method if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of a parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods, where in most cases a chained method returns a new instance of a new type. The returned instances only modify the root parent's state, not their own. An overly simplified example, to get the point across:

        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);

    Say I write a unit test under the Box class's unit tests that calls box.AddRabbits().SetCount(14). I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching it, even though it's far simpler to write a test for the above call first than to write a unit test for BoxedRabbits separately?
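
    For reference, a minimal sketch of what the combined test might look like (xUnit is an assumption here; the question doesn't name a test framework):

        using Xunit;

        public class BoxTests
        {
            [Fact]
            public void AddRabbits_SetCount_UpdatesBoxItems()
            {
                var box = new Box();

                box.AddRabbits().SetCount(14);

                // Asserting on the root parent's state exercises BoxedRabbits only indirectly.
                Assert.Equal(14, box.Items);
            }
        }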

    Read the article

  • Recursively apply ACL permissions on Mac OS X (Server)?

    - by mralexgray
    For years I've used the strong-armed duo of these two suckers...

        sudo chmod +a "localadmin allow read,write,append,execute,\
        delete,readattr,writeattr,readextattr,writeextattr,\
        readsecurity,writesecurity,chown"

        sudo chmod +a "localadmin allow list,search,add_file,add_subdirectory,\
        delete_child,readattr,writeattr,readextattr,\
        writeextattr,readsecurity,writesecurity,chown"

    ...for what I figured was a recursive, all-encompassing, whole-volume go-ahead for each and every privilege available (for one user, localadmin). Nice when I, localadmin, want to "do something" without a lot of whining about permissions, etc. The beauty is that this method obviates the need to change ownership, group membership, or the executable bit on anything. But is it recursive? I am beginning to think it's not. If so, how do I do THAT? And how can one check something like this? Adding this single user to the ACL doesn't show up in the Finder, so… Alright, cheers.
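
    A minimal sketch of the recursive form and a quick check, assuming OS X's chmod and ls (the volume path is illustrative):

        # -R applies the ACL entry to the folder and everything beneath it
        sudo chmod -R +a "localadmin allow read,write,append,execute,delete,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" /Volumes/Data

        # -e adds ACL entries to the listing, so a deep file can be spot-checked
        ls -le /Volumes/Data/some/deep/file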

    Read the article

  • Windows 7: enabling navigation of subfolders in pinned Start Menu folders

    - by AspNyc
    I'm just about to move from Windows XP to Windows 7, and I'm struggling with some of the interface changes. In XP, I was able to throw a folder into C:\Documents and Settings\username\Start Menu and have it appear on the Start Menu, complete with the ability to navigate through subfolders. I've figured out how to pin a folder onto the Start Menu in Windows 7, which required a registry hack. However, I am unable to view the subfolders of the pinned folder without opening a new Windows Explorer window. Is there any way to replicate the old XP behavior I'm used to? I'd like to be only a single click away from this handful of application links and folders, since I use them all the time throughout the day.

    Read the article

  • Dedicated Servers: Is one better than two for a LAMP pseudo-HA setup? [closed]

    - by bikedorkseattle
    Possible Duplicate: How to find web hosting that meets my requirements? I know there are zillions of comments about hosting out there, but I haven't read much about this. Our current well-known host is having too many problems, the hardware we are on is subpar, and I'm ready to leave. A day of downtime can cost as much as our monthly hosting bill. A month of bad performance is just killing us right now, user- and Google-wise. I'm wondering about running two dedicated boxes for LAMP: one running as the primary Nginx/Apache (proxy pass), and the other as the MySQL box. Running a single box scares the bejesus out of me, because who knows how long it will take anyone to fix a RAID card or whatever. The idea is to set this up as a failover system using Pacemaker and Heartbeat: if one server goes down, the other can take over, running both web and DB. There are some good articles over at Linode about this. I have a few DBs that are 1GB+ and would like to load them into memory. Because of this, I'm shying away from a Linode HA setup, because for the price I could do it with two dedicated servers as described. Am I mad or an idiot? What are people out there doing for pseudo-high-availability, good-performance setups under $400/month? I'm a webmaster; I do a lot of things, none of them that well :)

    Read the article

  • What are the Windows G: through Z: drives used for?

    - by Tom Wijsman
    In Windows you have a C: drive. The first things labeled beyond that seem to be extra stuff. So my DVD drive is D:, and if you put in a USB stick it becomes F:. And then some people also have A: and B:. But what are the G: through Z: drives for? Is it possible to connect so many things to a computer that they are all in use? Or even more than that? Would it give a BSOD? Or would it slow down the system somehow? Or what would happen? What if I want to connect even more drives to the computer? Because with the hard drive limits, it's more efficient to buy more drives than to buy a single drive with a lot of capacity. Is it possible to create drive letters like 0: through Z:, or AA: through ZZ:?

    Read the article

  • Why are there so few Wireless N Dual Band adapter PCI cards, only USB adapters instead?

    - by daiphoenix
    There have been several Wireless N dual-band routers/APs on the market for quite some time now, and there are several Wireless N dual-band USB adapters out there. But as for PCI/PCI-X card adapters, there seems to be only one (the Linksys WMP600N). Why is that? I find it very strange. Is it because the USB adapters are easier to install and can be used on multiple computers? But if so, why isn't it the same with single-band (2.4 GHz) Wireless N adapters? For those there are as many PCI card adapters as there are USB adapters. Also, can the USB adapters, despite the lack of external antennas, offer the same level of performance as a card with external antennas?

    Read the article

  • How to execute a command on multiple hosts using IPv6 only?

    - by math
    First of all, there is pdsh, which is essentially a parallel distributed shell that can execute commands on a list of given hosts. However, I find myself in an IPv6-only setting. It seems that pdsh is not able to use IPv6, as I am getting error messages:

        $ pdsh -w ^hostnames my_command
        pdsh@myhost: gethostbyname("foobar") failed

    I also tried using IPv6 addresses only, which didn't work either. So how do you run a single shell script for administrative purposes (no SGE or similar) on a bunch of hosts that are reachable over IPv6 only?
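
    A minimal fallback sketch using plain ssh, which handles IPv6 fine (the hostnames file and command are illustrative):

        # run the command on every host in parallel, forcing IPv6
        while read host; do
            ssh -6 -o BatchMode=yes "$host" 'my_command' &
        done < hostnames
        wait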

    Read the article

  • How to correctly handle redirect after site facelift

    - by Stefan
    I recently updated our site, taking it from a multi-page site to a single-page site. The problem now is that when the site is searched in, say, Google, it displays the site as well as the previously indexed pages. So if a user clicks, say, our "About" page, it takes them to our now-outdated material. I am hoping to get some guidance on how to properly handle this. I figure the first step is to set up a robots.txt on our new index page to tell the engines not to crawl beyond index.php. But in the meantime, how do I handle the fact that when searching for our site on Google we may still have users who try to click on sub-page links? Should I simply set up redirects while waiting for the engines to update? And if so, do I need to set up redirects on each page using PHP, or is this something I would take care of in our site's control panel? I am not very familiar with redirects... Any help is appreciated!
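
    A minimal sketch of a permanent (301) redirect for the old sub-pages, assuming Apache with .htaccess available; the old page names here are illustrative, and the same thing can be done per page in PHP:

        # .htaccess: send the old sub-pages to the new single page
        RedirectMatch 301 ^/(about|services|contact)\.php$ /

        # or at the top of each old PHP page
        <?php header("Location: /", true, 301); exit; ?>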

    Read the article

  • Is the (Ubuntu) Linux file copying algorithm better than Windows 7's?

    - by Sarath
    Windows copying has been a real mess ever since Windows Vista. Even though Microsoft claims they've improved the performance, from a user perspective it's not very visible. Even with a single file, the copying window spends too much time on 'Calculating' and then on finishing the copy (even after 100% completion the dialog sometimes remains active). At the same time, I was backing up some files in Ubuntu Linux and felt it was really fast. It might be a feeling caused by faster UI updates. I read an informative post from Jeff Atwood a few years back on Windows file copying, but my specific questions are: Is (Ubuntu) Linux file-copy performance better than Windows 7's? Do both Windows and Linux make use of multiple threads and a pipelining mechanism to improve the speed? If yes, which one is better?

    Read the article

  • WordPress 3 multi-site install

    - by mike
    Hello, I'm trying to figure out if this is possible... My company has a CMS product that was written in Java, and we decided to use WordPress to run blogs for our clients. Obviously, WordPress does not run on Tomcat (at least not by default), so we installed Pound (http://www.apsis.ch/pound/) on our server and set up Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine, but we would like to use WordPress multi-site so that we can manage all the blogs from a single interface. We would also like the URL for every site to be "/blog/", for example: http://www.site1.com/blog/ and http://www.site2.com/blog/. I'm thinking it would have to be done with Apache. Is it even possible? Thanks!

    Read the article

  • windows - batch moving files to another folder/directory

    - by jdamae
    I am getting an error message to the effect of "unable to move files to a single file". That is not what I am trying to do. What I am trying to do is move files from one folder to another folder (staging) and then delete the original folder. Please show me a better way to do this if I am not doing it correctly. Thank you. Here is my .cmd file:

        Y:
        move "Y:\ABC_files\*.js"  "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.css" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.png" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC_files\*.gif" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\"
        move "Y:\ABC.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\"
        rmdir "Y:\ABC_files"
        C:\"Program Files"\"App X"\App-IDE.exe -r ABC4.run
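
    For what it's worth, a minimal sketch of the same move using robocopy, which moves a whole set of files in one command (robocopy ships with Windows Vista/7 and is available for XP via the Resource Kit; the paths are taken from the script above):

        robocopy "Y:\ABC_files" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files" *.js *.css *.png *.htm *.gif /MOV
        move "Y:\ABC.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\"
        rmdir "Y:\ABC_files"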

    Read the article

  • Choosing an open source license such that maximum value is added to a startup

    - by echo-flow
    There are many companies that produce open source software products, and many business models that these companies can use. I'm particularly interested in companies like 280 North, the company behind the Objective-J and Cappuccino frameworks. My understanding of this organization's business model is that they:

    - worked to develop a tool which added significant value to developers,
    - released the tool under an open source license,
    - built a community around the tool (which was helped by the project's open source licensing),
    - created interesting demos illustrating the project's value.

    All of these things added value to the project and to the company that owned it. Finally, 280 North was sold to Motorola. My question has to do with the role of software licensing in this particular business model. 280 North licensed their software projects under the LGPL, which gave them some proprietary control over how the project could be used. I believe the LGPL is what's known as a "weak copyleft" license, meaning that the project can be linked to without the linking code also being licensed under the LGPL, but software derived directly from the project would need to be licensed under the LGPL. For web-oriented libraries in particular, weak-copyleft or non-copyleft licensing seems to be quite common; I can't think of a single example of a popular or well-known web-oriented library that is licensed under the GPL (or AGPL). The question, then, is how much value a weak copyleft license like the LGPL would add to a software venture like 280 North, versus a non-copyleft license such as the BSD license or the Apache Software License. I'd really appreciate any insight anyone can offer into this, but I'd be most interested in answers that can cite other companies as case studies or examples.

    Read the article
