Search Results

Search found 26374 results on 1055 pages for 'aaron solution evangelist'.

Page 12 of 1055

  • Affordable Link Building Service - The Perfect Ranking Solution

    If you have a website, you want to do whatever you can to make sure it is the best it can be, so you take great pains to get everything just right. For this, SEO is very important. Among the fastest-growing marketing strategies that can promote your website is link building that fits your budget.

    Read the article

  • Is there a Manifest solution? (8 replies)

    I have a 12-year-old, BC45-compiled, 32-bit GUI utility that fails to load on XP and 2003 with a GPF. It worked fine under 95, NT and 2000, and I didn't expect anything to be different for other OSes. But the failure was reported this week, and looking at our support logs there were other reports on this last year as well. Testing it on XP and 2003 confirms it. I think it is related to either comctl32.dll, comdlg32...
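    For reference, the kind of "manifest solution" usually meant here is an application manifest placed next to the executable (e.g. MyUtility.exe.manifest) declaring a dependency on version 6 of the common controls; a minimal sketch, where the assembly name is a placeholder and whether this actually cures the GPF depends on its real cause:
        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="1.0.0.0" processorArchitecture="X86"
                            name="MyCompany.MyUtility" type="win32"/>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls"
                                version="6.0.0.0" processorArchitecture="X86"
                                publicKeyToken="6595b64144ccf1df" language="*"/>
            </dependentAssembly>
          </dependency>
        </assembly>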

    Read the article

  • Designing a solution to dynamically load a class

    - by dot
    Background information: I have a web app that allows end users to connect to ssh-enabled devices and manipulate them. Right now I only support one version of firmware. The logic is something like this:
    1. The user clicks a button to run some command on a device.
    2. The web application looks up the class name containing the correct ssh interface for the device, using the device's model name (because the number of hardware models is so small, I have a list hard-coded in the web app).
    3. The web app creates a new ssh object using the class loaded in step 2.
    4. The ssh command is run and the session closed.
    5. The command results are displayed on the web page.
    This all works fine. Now the end users want me to be able to support multiple versions of firmware. The catch is that they don't want to have to document the firmware version anywhere, because of the overhead this would create in maintaining the system database. In other words, I can't look up the firmware version based on the device. The good news is that it sounds like I'll have to support at most two different versions of firmware per device. One option is to name the classes like this: deviceX.1.php, deviceX.2.php, deviceY.1.php, deviceY.2.php, where "X" and "Y" represent the model names and 1 and 2 represent the firmware versions. When a user runs a command, I will first try it with one of the class files; if it fails, I can try with the second. I think I'd always try the newer firmware version first, so in the above example I would load deviceX.2.php before deviceX.1.php. This will work, but it's not very efficient, and I can't think of another way around it. Any suggestions?
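    A rough sketch of the try-newest-then-fall-back lookup; it is written in C# purely for illustration (the question itself concerns PHP classes), and all type and member names are made up:
        using System;

        static class SshInterfaceFactory
        {
            // Probe the firmware-specific classes from newest to oldest and
            // return the first one that can be created and used successfully.
            public static object CreateForModel(string model)
            {
                for (int version = 2; version >= 1; version--)
                {
                    Type t = Type.GetType("Devices." + model + "V" + version);
                    if (t == null)
                        continue; // class for this firmware version doesn't exist

                    try
                    {
                        object ssh = Activator.CreateInstance(t);
                        // A cheap probe command would go here; if it fails,
                        // fall through and try the older firmware class.
                        return ssh;
                    }
                    catch
                    {
                        // Wrong firmware for this device: try the next (older) version.
                    }
                }
                throw new NotSupportedException("No ssh interface class works for model " + model);
            }
        }
    The same shape applies with PHP's class_exists() and dynamic instantiation; caching which version last succeeded for each device would avoid paying the fallback cost on every command.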

    Read the article

  • Solution Items in Visual Studio 2005/2008

    - by Muneeb
    Is it possible to add a class as a solution item and use it as a linked item in all the projects in the solution? Basically, I was thinking of creating a class (which will inherit ConfigurationSection) and keeping it as the solution item. I wanted to add it as a linked item in all the projects in the solution, so that everyone can use it to access the configuration properties. (Refer to this tutorial for more details.) Now, the issue I am facing is that when I create a class as a solution item, it doesn't have any namespace. The class shows up in IntelliSense inside the projects, but once I create an object of the solution-item class, the object doesn't show up in IntelliSense. Any ideas why?
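    A likely cause is that the shared file was created without a namespace declaration, since Visual Studio does not add one for files under Solution Items; declaring the namespace explicitly in the file usually makes the linked copies behave like any other class. A minimal sketch, assuming System.Configuration is referenced (the namespace and property names here are made up):
        using System.Configuration;

        // Namespace declared explicitly in the shared file; every project that links
        // this file compiles the class into MyCompany.Shared.
        namespace MyCompany.Shared
        {
            public class SharedSettingsSection : ConfigurationSection
            {
                [ConfigurationProperty("serviceUrl", IsRequired = true)]
                public string ServiceUrl
                {
                    get { return (string)this["serviceUrl"]; }
                    set { this["serviceUrl"] = value; }
                }
            }
        }
    Each project then adds the file with "Add as Link", declares a matching <section> entry in its own config file, and reads it through ConfigurationManager.GetSection as usual.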

    Read the article

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams, each responsible for a different part of the project. They have all set up their own C# projects and solutions with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory.
    The problem I have encountered is that I have found only one way to make all projects build into the same output directory: modifying the configurations of all of them. That is what we would like to avoid. We would prefer that these projects had no knowledge of this "global" solution; each team must retain the ability to work with just their own sub-solution. One possible workaround is to create a special configuration in every project just for the global solution, but that could create extra problems, since you then have to constantly keep that configuration in sync with the regular one used by the team. The last thing we want is to spend hours figuring out why something doesn't work when built under the global solution, just because of some check box the developers ticked in their own configuration but forgot to tick in the global one. So, to simplify: we need some sort of output directory setting, or post-build event, that is only present when building from that global, all-inclusive solution. Is there any way to achieve this without changing anything in the projects' configurations?
    Update 1: Some extra details I should mention. We need this global solution to be as close as possible to what the end user gets when he installs our application, since we intend to use it to debug the entire application when we need to figure out which part isn't working before sending the bug to the team working on that part. This means that when building under the global solution, the output directory hierarchy should be the same as it would be in Program Files after installation. For example, if we have a Program Files/MyApplication/Addins folder containing all the addins developed by different teams, the global solution needs to copy the binaries from the addin projects and place them in the output directory accordingly. The catch is that the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.
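    One partial workaround sometimes used (only a sketch, and it does touch each project once: the solution name "Global" and the Addins path below are assumptions) is a post-build event that keys off the $(SolutionName) macro, so the copy step is inert whenever a team builds its own sub-solution:
        if "$(SolutionName)" == "Global" xcopy /Y /I "$(TargetDir)*.dll" "$(SolutionDir)build\Addins\"
    Whether this is acceptable depends on how strictly "no changes to the projects" has to be interpreted.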

    Read the article

  • Project Reference added to one of the projects in the same solution appears broken in another solution

    - by CSharpLearner
    I have a couple of solutions. In the first solution I have many projects. One of the projects, named 'A', has a project reference to another project, 'B', in the same solution. In the second solution, project 'A' is added but project 'B' is not. Both solutions build successfully. However, in the second solution, the reference to B added in project A appears broken. Why? Now, in the first solution, instead of adding a project reference to B in A, I simply add a 'file reference' to B's DLL (which is copied to the common output directory created for all the projects) in A. Now the reference appears broken in both solutions, even though both solutions build successfully. What should I do in such a scenario?

    Read the article

  • How can I clone a .NET solution?

    - by tobinharris
    Starting new .NET projects always involves a bit of work. You have to create the solution, add projects for different tiers (Domain, DAL, Web, Test), set up references, solution structure, copy javascript files, css templates and master pages etc etc. What I'd like is an easy way of cloning any given solution. If you use copy/paste, the problem is that you need to then go through renaming namespaces, assembly names, solution names, GUIDs etc. Is there a way of automating this? Something like this would be great: solutionclone.exe --solution=c:\code\abc\template.sln --to=c:\code\xyz --newname=MySolution I'm aware that Visual Studio has project templates, but I've not seen solution templates. Ideas welcome, thanks in advance folks!
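    A rough sketch of what such a tool could look like, assuming plain text substitution plus fresh project GUIDs is enough (paths, names and the file-type list are illustrative, and real solutions may need more care, e.g. renamed folders or binary resources):
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text.RegularExpressions;

        class SolutionCloner
        {
            static void Main(string[] args)
            {
                // Hypothetical usage: SolutionCloner.exe c:\code\abc c:\code\xyz Template MySolution
                string sourceDir = args[0], targetDir = args[1];
                string oldName = args[2], newName = args[3];

                // Pass 1: collect project GUIDs so each clone gets a fresh one,
                // while project-type GUIDs in the .sln are left untouched.
                var guidMap = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
                foreach (string proj in Directory.GetFiles(sourceDir, "*.csproj", SearchOption.AllDirectories))
                {
                    Match m = Regex.Match(File.ReadAllText(proj), @"<ProjectGuid>\{([^}]+)\}</ProjectGuid>");
                    if (m.Success)
                        guidMap[m.Groups[1].Value] = Guid.NewGuid().ToString().ToUpperInvariant();
                }

                // Pass 2: copy everything, renaming and substituting text where needed.
                foreach (string src in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
                {
                    string rel = src.Substring(sourceDir.Length).TrimStart(Path.DirectorySeparatorChar);
                    string dest = Path.Combine(targetDir, rel.Replace(oldName, newName));
                    Directory.CreateDirectory(Path.GetDirectoryName(dest));

                    string ext = Path.GetExtension(src).ToLowerInvariant();
                    if (ext == ".sln" || ext == ".csproj" || ext == ".cs" || ext == ".config")
                    {
                        string text = File.ReadAllText(src).Replace(oldName, newName);
                        foreach (var pair in guidMap)
                            text = text.Replace(pair.Key, pair.Value); // keeps cross-project references consistent
                        File.WriteAllText(dest, text);
                    }
                    else
                    {
                        File.Copy(src, dest, true); // binaries, images, etc. copied verbatim
                    }
                }
            }
        }
    In practice you would also want to skip bin/obj folders and handle GUID casing, but the two-pass rename-then-substitute shape is the core of it.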

    Read the article

  • Simple vs Complex (but performance efficient) solution - which one to choose and when?

    - by ManojGumber
    I have been programming for a couple of years and have often found myself at a dilemma. There are two solutions: one is the simple approach, easier to understand and maintain; it involves some redundancy and some extra work (extra IO, extra processing) and is therefore not the most optimal solution. The other uses a complex approach that is difficult to implement, often involves interaction between lots of modules, and is a performance-efficient solution. Which solution should I strive for when I do not have a hard performance SLA to meet, and even the simple solution can meet the performance SLA? I have felt disdain among my fellow developers for the simple solution. Is it good practice to come up with the most optimal, complex solution if your performance SLA can be met by a simple solution?

    Read the article

  • Looking for ballpark pricing on an affordable Cisco VOIP solution for our office

    - by guytech
    We have about 8 incoming PSTN lines that are currently on an old and antiquated Nortel Meridian ICS system. This system has been giving us some grief, so we're looking for a new VOIP solution. I've been looking at a Cisco solution, and it does seem pricey but, I'm sure, effective. Unfortunately, we probably can't afford a Cisco Unified Communications 520, which seems to be the ideal solution. We have about 15 people who need an extension and voicemail. We really don't have any need for a fancy system, just an auto attendant of some sort when people call us. It looks like we'll have to get an older router and an add-on card to get best-value pricing for what we're looking for. However, I don't know a lot about Cisco voice products, so I'm a bit lost as to what to get. The only thing I am sure of is the pricing on VOIP phones, which we expect to be about $100-200. However, I'm not sure what pieces of VOIP infrastructure to get. Any advice? I am familiar with Asterisk, but right now I'm looking at pricing for a Cisco solution.

    Read the article

  • iptables & IP-routed network solution for host net and VMs' subnet

    - by Daniel
    I've got a ProxmoxVE 2.1-ruled KVM node on Debian and a bunch of VM guest machines. This is what my networking looks like:
        # network interface settings
        auto lo
        iface lo inet loopback
        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
    And I've got two working subnet solutions. The first:
        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0
    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:
        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
    I can even NAT internal addresses:
        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345
    My inexperienced mind is ready to double the VMs' network adapters: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to solve the problem and that everything can be resolved via iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.
    Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The various customer needs that drive this new trend are often very different, but they are almost always united by two main objectives, characteristics of innovation and economy:
    - elasticity of the infrastructure, both hardware and software;
    - investments proportional to the progressive needs of the current infrastructure.
    A concrete use case that I worked on recently demanded the fulfillment of both basic requirements, economy and innovation. The client needed to manage a variety of data caches that could process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would let him handle the load peaks likely to occur at certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid tying up investments in owned hardware/software architectures, so a solution based on cloud technologies and architectures already offered by the market was requested.
    Coherence can already address the requirement of a large cache spread between different nodes in the cluster, and provides further technology for search and parallel computing with simultaneous use of all the hardware infrastructure's resources. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, it satisfies the need for a resilient infrastructure that can also be based on nodes temporarily housed in cloud architectures. There are different types of configurations that can be realized using Coherence's "Push Replication" functionality:
    - Active-Passive
    - Hub and Spoke
    - Active-Active
    - Multi-Master
    - Centralized Replication
    Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If, however, the requirement should change over time, it will be particularly easy to change this configuration into an Active-Active one. Although very simple, the small sample in this post, inspired by that project, is effective for better understanding the features and capabilities of Coherence and its configurations.
    Let's create two distinct Coherence clusters, located miles apart in two different domain contexts, one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just on the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so that a simple client can insert data only into Site 1. On both sites a listener will be subscribed that listens for changes to specific objects within the various caches.
    To implement these features, you need 4 simple classes:
    - CachedResponse.java: the POJO class that will be inserted into the cache; it holds useful information about the hypothetical link navigation.
    - ResponseSimulatorHelper.java: a link simulator whose task is to randomly create CachedResponse objects that will be added to the caches.
    - CacheCommands.java: the model of our example; it receives instructions from the controller and performs basic operations against the cache, such as inserting, deleting, updating and listening for objects within the cache.
    - Shell.java: our controller, which issues the commands to be executed against the caches of the two sites.
    In short, we run the Java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the Java class "CacheCommands"; the simulator "ResponseSimulatorHelper" randomly creates the new "CachedResponse" instances. Finally, the Shell class listens for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1.
    Now we create the configurations of the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with read/write features, using the cache store class for "push replication", a functionality offered by the Oracle Coherence Incubator project. For Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled so that it can receive updates from Site 1.
    - Coherence cache config XML file for the "storage node" on Site 1: site1-prod-cache-config.xml
    - Coherence cache config XML file for the "storage node" on Site Cloud: site2-prod-cache-config.xml
    For the two "Shell" clients, which will connect respectively to the two clusters, we have provided two simple access configurations:
    - Coherence cache config XML file for the Shell on Site 1: site1-shell-prod-cache-config.xml
    - Coherence cache config XML file for the Shell on Site Cloud: site2-shell-prod-cache-config.xml
    Now we just have to put everything together and run our tests.
    To start at least one "storage" node (which holds the data) for Site Cloud, we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:
        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud
    To start at least one "storage" node for Site 1, we run the same standard Coherence class, com.tangosol.net.DefaultCacheServer, with the following parameters and values:
        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1
    Then we start the first client "Shell" for Site Cloud, launching the Java class it.javac.Shell with these parameters and values:
        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud
    Finally, we start the second client "Shell" for Site 1, launching a new instance of the class it.javac.Shell with the following parameters and values:
        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1
    And now, let's run some tests to validate and better understand our configuration.
    TEST 1. The purpose of this test is to load objects into the Site 1 cache and see how many objects end up cached on Site Cloud.
    In the "Shell" launched with the parameters to access Site 1, write and run the command:
        load test/100
    In the "Shell" launched with the parameters to access Site Cloud, write and run the command:
        size passive-cache
    Expected result: if all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently the "push replication" functionality has updated Site Cloud by sending the 100 objects to the second cluster, where they will have been put into the corresponding cache, which we named "passive-cache".
    TEST 2. The purpose of this test is to listen for the delete and add events happening on Site 1 and replicated into the cache on Site Cloud.
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Best Solution for Load Balancing NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America that has 30GB~50GB of updated data every day, and it's very slow for our Europe and Asia branches to access it directly. Therefore, I'm trying to set up two replica servers on those continents. I'm currently using rsync, but I wonder if there is a better solution that acts more like a distributed RAID, allowing the user to transparently access a file whether it is synced or not, with the user's request dispatched to the remote server if the file is not yet synced. I'm now looking into DRBD, but it doesn't seem to have this auto-dispatching functionality. Does anyone know of a better solution?

    Read the article

  • Drop-in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a Mongo Database rather than log files. Logs will then be all on one server, queryable, and overall easier to manage. I'd love to find a solution that will allow all the different processes I have running to write to DB rather than files (or perhaps something to read the files, pass the logs on and truncate the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?
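    If nothing off-the-shelf fits, a thin wrapper around the MongoDB driver is one way to start; the same idea ports to any language with a driver. A minimal sketch in C#, assuming the official MongoDB .NET driver, where the database and collection names are just placeholders:
        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class MongoLogger
        {
            private readonly IMongoCollection<BsonDocument> _events;

            public MongoLogger(string connectionString)
            {
                var client = new MongoClient(connectionString);
                _events = client.GetDatabase("logs").GetCollection<BsonDocument>("events");
            }

            public void Log(string host, string process, string level, string message)
            {
                // One document per log line keeps things queryable by host, process, level or time.
                _events.InsertOne(new BsonDocument
                {
                    { "ts",      DateTime.UtcNow },
                    { "host",    host },
                    { "process", process },
                    { "level",   level },
                    { "message", message }
                });
            }
        }
    A capped collection is worth considering for the log database, so old entries age out on their own instead of needing truncation.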

    Read the article

  • Suggest a solution to track the changes in a test DB and replicate them in another DB

    - by Pranav
    Suggest a solution to track the changes in a test DB and replicate them in another DB. My client needs a script, or any other solution, for the following: he has two databases, one of them a test DB on which he tests his data on a test portal; if he finds the changes appropriate, he wants those same changes made in the main DB so they are displayed on the live site. For this he needs a way to record or track every update/deletion/insertion, so that he can repeat them in the main DB if they are found appropriate. NOTE: we have only one server, no separate server, hence binary log replication doesn't seem to work for my case.

    Read the article

  • Online backup solution

    - by Petah
    I am looking for a backup solution to back up all my data (about 3-4TB). I have looked at many services out there, such as:
    http://www.backblaze.com/
    http://www.crashplan.com/
    Those services look very good and are reasonably priced, but I am worried about them because of incidents like this: http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/
    I am wondering if there is any online backup solution that offers a service level agreement (SLA) with compensation for data loss at a reasonable price (under $30 per month). Or is there a good solution that offers a high enough level of redundancy to mitigate the risk? Required:
    - Offsite backup to prevent data loss in case of fire/theft.
    - Redundancy to protect the backup from corruption.
    - A reasonable cost (< $30 per month).
    - An SLA in case the service provider defaults on its agreements.

    Read the article

  • Password manager solution: Symbian based phone and a Linux machine (Windows is not important, but wo

    - by Kent
    Hi, I currently use KeePassX to manage my passwords on my Linux (Xubuntu) machine. It's nice to have all the passwords encrypted, but sometimes I'd like to be able to look up a password when I'm on the run. Therefore I'm looking for a solution which I can synchronize with my phone. I have a Nokia N82, which is a Symbian OS v9.2 based phone for the S60 3rd Edition platform with Feature Pack 1. I'd like an open source solution if possible; if that isn't possible, I wouldn't mind paying for a good solution. If Windows can be added to the synchronization mix, that's nice, but it's absolutely not a primary requirement (I don't even have any computer running Windows).

    Read the article

  • Small business VPN solution

    - by Crash893
    I've been looking for a while, but I'd like to implement a VPN solution for anywhere from 1-5 employees at a time (possibly 10 in a year or so).
    Edit: Basically, I would like outside users to fire up a client or open a web page and be able to access things inside the company network (shared drives / printers / webapps / etc.). I've looked at Astaro Gateway, but I'm not sure that's the right tool for the job. I know "best" is a subjective term, so I would like to break it into two different suggestions: 1) what is the cheapest solution given the criteria above, and 2) what solution will result in the least amount of headaches from the point of view of maintenance and learning curve?

    Read the article

  • Exchange Failover Solution

    - by Dan
    I've been given the task of coming up with an Exchange solution that will support 200 users total throughout 4 states, at 1GB per user. It needs to have a failover solution, and the failover must reside in another location. There is an MPLS that connects the locations. I am hoping to get recommendations on hardware and software setups. I recently worked with some big-name reps who steered me in the wrong direction, and now I'm a few days away from my proposal date with bogus quotes, scrambling for a solution. I managed a standalone Exchange 2003 server for years, but I'm at a loss now trying to figure out clustering/failover... Any help would be greatly appreciated. Thank you.

    Read the article

  • List all documents (webparts) and sites using a certain solution in SharePoint 2007

    - by tnolan
    I would like to uninstall a SharePoint application template (GroupBoard Workspace, to be exact), but I want to make sure nothing currently relies on it. I don't see any functions within stsadm that will tell me this information, and I have even tried SPM, which would work, but with such a huge site it's tedious to go through every single web and page to see which features are in use. Is there a way (probably with SQL, using the id from stsadm -o enumsolutions) to list everything that relies on a template within a given solution, including webparts on custom pages? If this is not possible, what is the best way to check dependencies prior to uninstalling a solution (especially since GBW is not the only one on my axe list)? Note: I know that stsadm -o deletesolution will stop me from removing something that is in use, but I want to see all of the things that are using a given solution.

    Read the article

  • Putting solution build output in a different directory

    - by Rajesh
    Hi all, I have an issue building my solution (Hardcopy.sln). The solution consists of many modules, and each module directs its output to the bin/debug/ folder during the whole-solution build. I want to redirect the output of each module to a different location. How can I do that? I am using the MSBuild utility to build the solution from my NAnt scripts, and I want to do it with the MSBuild utility in NAnt. Is there any way? Thanks, Rajesh
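    One approach worth trying (a sketch; the output path is illustrative and a few project types ignore a global output override) is to pass the output directory as an MSBuild property on the command line that NAnt already uses to invoke MSBuild, e.g.:
        msbuild Hardcopy.sln /p:Configuration=Debug /p:OutDir=C:\Builds\Hardcopy\
    Note that OutDir generally needs the trailing backslash; per-module subfolders would still require per-project settings or a copy step afterwards.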

    Read the article

  • Please recommend a free stealth remote access solution for internal network

    - by Nathaniel_613
    Hi, I need the ability to stealthily access, view, and control a few dozen PCs on my company's network. I would need a control panel window so I can instantly connect to any of the users. Please recommend a secure solution that will not make us vulnerable to viruses and hackers. All of the PCs have dynamic IP addresses, so I may have to use DNS names or a web-based solution. Thank you very much, Nathaniel.

    Read the article

  • open source VDI solution [closed]

    - by sysconfig
    Looking to build a 10-node, eventually 50-node, VDI solution. The only OS on the desktops will be Ubuntu (or some other Linux). Looking for easy setup and administration, remote administration, etc. I will probably just use diskless PCs as clients for now, but I'd want a solution that can accommodate thin clients as well; maybe there it's just XDMCP from the server. It must be completely open source (no VMware). Thoughts?

    Read the article
