Search Results

Search found 19446 results on 778 pages for 'network printer'.


  • Allow READ access to local folders in 2003SBS AD

    - by Dan M.
    Have an SBS 2003 client with a mess of a domain that is in the process of being cleaned. But, for the life of me, I cannot find a setting that will allow write access to the local hard disk for domain users with redirected profiles (to the server). This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only (RO) no matter how many times I tried. Every time I would uncheck the RO property, it would reset sometimes right after I hit OK. Thanks in advance! Dan
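
    A hedged sketch of one way to grant the permission from a script rather than the Explorer security dialog; cacls ships with XP/2003, and the domain name below is a placeholder:

      cacls "%APPDATA%" /T /E /G "MYDOMAIN\Domain Users":C

    The /E switch edits the existing ACL instead of replacing it, /T recurses, and :C grants change (write) access. On Vista/2008 and later, icacls is the more capable equivalent.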

    Read the article

  • set up re-direct for lan clients to local t&c page

    - by tb2571989
    Hi, I'm trying to set up something on my network so that when users connect and try to use the internet, they are redirected to a locally hosted terms-and-conditions and policy page. Once they click "accept" they will be passed through to their homepage; otherwise, if they decline, the window will close or show them an error message. I've spent a while looking into this and am wondering if it's possible to do without having to set up or add to a firewall. Otherwise let me know what my options are and I can pass it on. Many Thanks Tom
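
    What is being described is essentially a captive portal, and most implementations do rely on a rule at the gateway. As a minimal sketch of the usual approach (assuming a Linux gateway where the T&C page is served locally at 192.168.1.1 and eth1 faces the LAN; addresses and interfaces are placeholders):

      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.1:80

    The "accept" button would then add an exception for that client's IP or MAC so its traffic is no longer rewritten.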

    Read the article

  • backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs and HDDs, but now they want all of it in one place, attached to the network, for everybody in the office to access. 20 TB of the data is new and the rest is old; the important thing now is to store these 20 TB and gradually store the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure that has either tape drives or HDDs (we haven't decided yet), or getting a NAS and connecting it to the HP server. What should we do? This is new for us.

    Read the article

  • how do i automatically add a new route to the routing table?

    - by Robbie Mckennie
    I'm looking at linking two networks with a long-range Ethernet bridge. I know I can connect my two networks with a router, but my problem is: how will my computers know where to send packets if I don't add the route manually? I COULD add them manually, but it seems like a hassle. I have very, very limited knowledge of RIP (I know it has something to do with routing), but I don't know how to use it. Edit: My vision for the network would be the 2 networks (which are currently independent home networks), connected by a microwave Ethernet link. I assumed I'd need a router on one end of the bridge to handle communication between the 2 networks.
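
    For two sites, a single static route on each router is usually all that is needed; a dynamic protocol such as RIP (via a daemon like Quagga's ripd) only starts to pay off with more routers. A minimal sketch, assuming the remote LAN is 192.168.2.0/24 and the far end of the bridge is reachable at 10.0.0.2 (both addresses are placeholders):

      ip route add 192.168.2.0/24 via 10.0.0.2

    The client machines themselves then only need their normal default gateway; the router takes care of forwarding across the link.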

    Read the article

  • Configure SMTP server windows

    - by Jake
    I am configuring a local network and for some reason I can't get the server to send an email. I already installed the SMTP server and configured it using this tutorial: http://www.itsolutionskb.com/2008/11/installing-and-configuring-windows-server-2008-smtp-server/. But when I try to send an email using code, the email gets picked up from mailroot/pickup and dropped in mailroot/queue, and it stays in the queue forever; it never goes anywhere. I even tried dropping a basic mail.txt file with this in it: to:[email protected] from:[email protected] subject:This is a test. this is a test. Still the same thing happens. Is the SMTP server not configured right, or is there something else I am missing? This is my first time setting up an SMTP server.
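
    One thing worth checking is that the file dropped into the pickup directory is a complete RFC 822 message, with each header on its own line and a blank line separating the headers from the body; a minimal sketch, reusing the placeholder addresses from the question:

      To: [email protected]
      From: [email protected]
      Subject: This is a test.

      This is a test.

    A message that parses correctly but still sits in mailroot/queue usually points at delivery rather than pickup, e.g. the server cannot resolve or reach the destination domain, or outbound relaying is blocked.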

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page from Windows and Mac, and the response is instant, while the Linux version needs to wait 4 seconds before it shows the response. I just hope this can be solved. More information added. In Ubuntu:

      wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
      --2011-03-04 14:21:17-- xxx://192.168.0.135/test.cgi?cmd=
      Connecting to 192.168.0.135:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `test.txt'
      [ <=> ] 17 --.-K/s in 0s
      2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    while in Mac OS:

      wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
      --2011-03-04 14:22:33-- xxx://192.168.0.135/test.cgi?cmd=
      Connecting to 192.168.0.135:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `test.txt'
      [ <=> ] 17 --.-K/s in 0s
      2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    In Ubuntu it delays 4 seconds, while Windows and Mac do not. I believe it may be related to some setting in the network config, such as packet size or window size, but I have no idea how to set this. PS: because the posting limit does not allow me to post the URL, I have marked the scheme as xxx.
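
    A hedged way to see where the 4 seconds are being spent on the Ubuntu box is to timestamp wget's network-related system calls; the gap in the output usually shows whether the time goes into name or reverse lookups, the connect, or the wait for the CGI itself:

      strace -tt -e trace=network wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt

    If the gap sits around the resolver calls, an IPv6 or DNS setting is the likely culprit; if it sits after the request is written, the device is simply answering that client slowly (some embedded servers do a reverse lookup of the caller first).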

    Read the article

  • using nmap to guess remote OS and probe service details on a single port only

    - by WoJ
    I am looking at scanning a large network with nmap in order to (1) identify the OS of devices (-O --osscan-limit) and (2) probe for details of a service on a single port (I would have used -sV for all open ports). The problem is that -sV will probe all the ports (which I do not want to do for performance reasons), and I cannot use -p to limit the ports to the one I am interested in, as this impacts the OS fingerprinting. I could not find anything in the manual to limit the service probing. Thank you for any ideas (including other approaches outside of nmap, though I would prefer to stick to nmap).
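
    One hedged workaround is simply to split the job into two runs, so the port restriction only applies to the version probe; the target range and port below are placeholders:

      nmap -O --osscan-limit -oA osscan 10.0.0.0/24
      nmap -sV -p 443 -oA svcscan 10.0.0.0/24

    The OS scan still gets the broad port sample it wants, the -sV pass touches only the one port, and the two -oA outputs can be merged afterwards by host address.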

    Read the article

  • Nagios: Which services should I monitor on different roles of servers?

    - by Itai Ganot
    I've started working at a new workplace, and my first task is to build a Nagios server and configure it to monitor the servers in the network. Since I'm starting from scratch, I wanted to hear from you, experienced users: which checks would you configure for each role? For example, there are some basic checks which I run on each Linux machine I monitor: SSH, ping, load, current users, swap usage, etc. Now my question is, which specific checks should I run for a database server, for a network switch, or for httpd servers? (I currently monitor how many httpd processes are running.) Thanks in advance
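
    As a hedged sketch of where the standard plugin set already covers those roles (hostnames, thresholds, credentials and community strings are placeholders):

      check_mysql -H db01 -u nagios -p secret              # database: connectivity; check_mysql_query for application-level checks
      check_snmp -H sw01 -C public -o ifOperStatus.1       # switch: per-port status and traffic via SNMP
      check_http -H web01 -u /healthcheck -w 2 -c 5        # httpd: response code and response time, not just process count

    Pairing each of these with the generic disk, load and swap checks usually covers the first round of monitoring before anything custom is written.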

    Read the article

  • Disk quota problem in Windows Server SBS 2003

    - by deddebme
    I have got a new job, and the existing SBS 2003 domain setup is insecure (i.e. everyone is a domain admin, etc.). There are lots of problems due to the inexperienced "network admin", and I am trying to fix them one by one. One issue I found quite weird is that the "Quota" tab exists for the C: (NTFS) drive but not for the D: (NTFS) drive. I played around with gpedit to enable disk quotas (it was "not configured" before), but I still can't see that tab. Have you seen this problem before? How did you solve it?
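
    A hedged first check from a command prompt on the server, to confirm that D: really is NTFS and to see what the quota state on that volume is regardless of what the GUI shows:

      fsutil fsinfo volumeinfo d:
      fsutil quota query d:

    If fsutil reports the volume as something other than NTFS (or as a mounted or removable device), that by itself explains the missing tab, since disk quotas are only offered on local NTFS volumes.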

    Read the article

  • which vista services can be disabled with impunity?

    - by GwenKillerby
    I use Vista on an HP Pavilion DV2 laptop. When I look through all the services my laptop starts, it really seems there are way too many of them. I multi-boot with XP and 7. Both start up in 40 seconds; Vista takes four minutes. Is there some software that can determine which services I don't need? On 7, there's no proprietary HP stuff at all, yet it seems to run fine. There are a LOT of these services, and some just sit there doing nothing except monitoring for updates I don't really need or want, or need to know about the second they're available. My laptop is the only computer I use at home; there's no home network aside from the modem-router, which is cabled, not WiFi. Take, for instance, Parental Controls, accessibility features for people with bad eyesight, and Tablet PC support; I really never use any of that. Hope this question is specific enough. Thanks, Gwen.
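
    For the examples named above, a hedged sketch of disabling a service from an elevated prompt (the space after "start=" is required; service names vary between Vista editions, so confirm them in services.msc first):

      sc config TabletInputService start= disabled
      sc config WPCSvc start= disabled

    Setting a service to "Manual" instead of "Disabled" is the safer first step, since Windows can still start it on demand if something unexpectedly depends on it.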

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM:

    Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager?

    Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode, which live-migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of the Oracle VM Documentation, so I won’t spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple. Simply navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are added to the text boxes shown below:

      YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
      YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

    Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one), but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts have so far been NFS and SAN - SAN being prohibitively expensive and NFS seeing some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds. The secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi, and ping time between them is ~0.3 ms.
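
    A hedged sketch of the NFS-side knobs that usually dominate this kind of glob()/stat-heavy slowdown; the server name, export and mount point are placeholders:

      mount -t nfs -o noatime,nocto,actimeo=60 nfs01:/srv/app /var/www/app

    Longer attribute caching (actimeo) and skipping close-to-open consistency (nocto) cut the per-file round trips that PHP's stat calls generate, at the cost of nodes seeing code changes up to a minute late; on the PHP side, a larger realpath cache and an opcode cache remove most of the remaining stats.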

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Simpler Code

    Now that the classes representing an Azure Table entity are defined, let’s review what the code using the Azure SDK looks like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();
          return partitionQuery.ToList();
      }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn’t a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster results. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Bind dns server in Solaris 10 and win xp clients

    - by stevecomptech
    Hi. I added this to the zone db file (I am running Solaris 10):

      _ldap._tcp.mydomain.com.               SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.mydomain.com.           SRV 0 0 88  dc.mydomain.com.
      _ldap._tcp.dc._msdcs.mydomain.com.     SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.dc._msdcs.mydomain.com. SRV 0 0 88  host.mydomain.com.

    Now I get this error when I try to join Win XP to the domain:

      The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
      The following domain controllers were identified by the query:
      host.mydomain.com
      Common causes of this error include:
      Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
      Domain controllers registered in DNS are not connected to the network or are not running.

    What do I need to change so that my Win XP machine can join the domain?
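
    As the error text hints, the SRV targets also need A records in the zone so the client can resolve the domain controller itself. A hedged sketch of the missing records (the address is a placeholder for the DC's real IP, and dc/host should both point at whichever name actually is the domain controller):

      dc.mydomain.com.    IN A 192.168.1.10
      host.mydomain.com.  IN A 192.168.1.10

    After adding them, bump the zone serial and reload BIND so the XP client sees the new records.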

    Read the article

  • lamp server permissions on development server

    - by user101289
    I run a LAMP server on an Ubuntu laptop I use only for development. I am not greatly concerned with security, since the server is never accessible outside the local network and it's turned off when I'm not using it. My question is: what is the simplest and 'best' way to set permissions/users/groups so that when my own user creates, edits or writes files in the webroot, I won't need to go through and chmod/chown everything back to the www-data user? Should I add myself to the www-data group? Or chown the webroot to www-data:myself? Or is there a best practice for this situation, so I don't have to keep re-setting the ownership of these files? Thanks
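
    A hedged sketch of the usual group-based arrangement (username and webroot path are placeholders): add yourself to www-data, make the webroot group-writable, and set the setgid bit on directories so new files keep the group.

      sudo usermod -aG www-data myuser
      sudo chown -R myuser:www-data /var/www
      sudo chmod -R g+rwX /var/www
      sudo find /var/www -type d -exec chmod g+s {} \;

    After logging out and back in (so the new group membership takes effect), files you create stay readable by Apache and editable by you without further chown runs.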

    Read the article

  • OpenVZ multiple networks on CTs

    - by user6733
    I have a Hardware Node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. CTs can access eth0 (the public interface) fine, but I can't get CTs to access eth1 (the private network). I tried:

      # on HN
      vzctl set 101 --ipadd 192.168.1.101 --save
      vzctl enter 101
      ping 192.168.1.2     # no response here

      ifconfig             # on CT returns:
      lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101)

    I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.
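
    With venet, the HN routes all container traffic itself, so the 192.168.1.x packets leave via whatever interface the HN's own routing table picks. A hedged alternative is to give the container a veth interface bridged onto eth1, so it sits on the private LAN directly (the bridge name is a placeholder, and a br1 bridge enslaving eth1 must already exist on the HN):

      vzctl set 101 --netif_add eth1,,,,br1 --save
      vzctl restart 101

    Staying with venet instead would mean adding a source-based routing rule on the HN so traffic from 192.168.1.101 is sent out eth1 rather than eth0.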

    Read the article

  • Wireless will not connect on an Asus U56? [closed]

    - by ernie
    I have an ASUS U56. I could connect to the internet without delay or problems in Ubuntu 11.04. Now, running Ubuntu 11.10, I have not been able to connect. I can see the network and the computer tries to connect, but it never makes the connection. I have been trying to fix this problem since the release date of 11.10. I had to install the older 2.6 kernel to get the wifi to work. Any other solutions?

      ernest@ernest-U56E:~$ ifconfig
      eth0   Link encap:Ethernet  HWaddr 14:da:e9:25:6c:d0
             UP BROADCAST MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
             Interrupt:53

      lo     Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      wlan0  Link encap:Ethernet  HWaddr 40:25:c2:3f:46:d4
             inet addr:192.168.xx.xxx  Bcast:192.168.10.255  Mask:255.255.255.0
             inet6 addr: fe80::4225:c2ff:fe3f:46d4/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:44962 errors:0 dropped:0 overruns:0 frame:0
             TX packets:32287 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:35614976 (35.6 MB)  TX bytes:5400816 (5.4 MB)
      ernest@ernest-U56E:~$
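
    A hedged way to narrow this down on the 11.10 kernel is to identify which wireless driver is in use and whether the association fails at the driver or the authentication stage:

      lspci -nnk | grep -iA3 network
      rfkill list
      dmesg | grep -iE 'wlan|firmware|auth'

    The question mentions compat-wireless on the older kernel; on 11.10 the rough equivalent is usually a linux-backports-modules wireless package built for the running kernel, which updates the driver without pinning the whole system back to the 2.6 kernel.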

    Read the article

  • script Disk Management configuration

    - by Joseph
    I have 10 workstations with large monitors that have USB slots and several card readers built in. The card readers cannot be disabled and will map to drive letters when I image the computers. I go into Disk Management, delete the drive mappings, and add mappings to a single folder on C:\ with a subfolder for each slot. I have to do this because scripts run that expect specific drive-letter mappings to network resources. Is there a way to script the deleting and adding of drive mappings instead of having to use the Disk Management GUI manually on each workstation? The workstations are running XP Professional.
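
    A hedged sketch of doing the same thing with a diskpart script (volume numbers, letters and paths are placeholders; run it once per workstation, e.g. from a post-imaging task, with "diskpart /s remap.txt"):

      rem remap.txt
      select volume 4
      remove letter=E
      assign mount=C:\CardReaders\Slot1
      select volume 5
      remove letter=F
      assign mount=C:\CardReaders\Slot2

    Running "list volume" inside diskpart shows which volume numbers the card-reader slots get after imaging, and mountvol.exe is an alternative if you prefer to script by volume GUID instead of number.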

    Read the article

  • How can I use a computer as a router and send all client traffic through anonymous proxies?

    - by Terrapin
    Is there a way that I can set up a spare box as a router on my network and route client traffic through a proxy in order to hide my location? Specifically, I would like internet traffic to/from my Roku box to be routed via a proxy, but there is no proxy support built into the Roku. So I would like to wire my Roku directly to my computer's second NIC and force all traffic through a proxy. What kind of software and hardware setup will I need? Also, which anonymous proxy services are best for this purpose? I'm not interested in full anonymity or encryption; I simply want to mask my location while providing the best possible throughput.
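
    A hedged sketch of the Linux version of this setup: the spare box NATs the Roku's subnet and pushes its TCP traffic through a local transparent-proxy helper such as redsocks, which forwards to whatever upstream HTTP/SOCKS proxy you subscribe to (interface names and the redsocks port are placeholders):

      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      iptables -t nat -A PREROUTING -i eth1 -p tcp -j REDIRECT --to-ports 12345

    The Roku only needs the spare box set as its gateway; the alternative that avoids per-protocol redirection entirely is to run a VPN client on the box and route the Roku's traffic into the tunnel.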

    Read the article

  • The Importance of Fully Specifying a Problem

    - by Alan
    I had a customer call this week where we were provided a forced crash dump and asked to determine why the system was hung. Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it's threads spinning on busy-wait type locks). This vmcore showed none of that. In fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced. This prompted the question back to the customer: what exactly were you seeing that made you believe the system was hung? It took a few days to get a response, but the response that I got back was that they were not able to ssh into the system, and when they tried to log in to the console, they got the login prompt, but after typing "root" and hitting return, the console was no longer responsive. This description puts a whole new light on the "hang". You immediately start thinking "name services". Looking at the crash dump, yes, the sshds are all in door calls to nscd, and nscd is idle waiting on responses from the network. Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly I am seeing a few connections over the non-secure LDAP port to a different LDAP server just sitting open. My feeling at this point is that we have either a non-responding LDAP server or one that is responding slowly, the resolution being to investigate that server.

    Moral

    When you log a service ticket for a "system hang", it's great to get the forced crash dump first up, but it's even better to get a description of what you observed that made you believe the system was hung.

    Read the article

  • Best way to find the computer a user last logged on from?

    - by Garrett
    I am hoping that somewhere in Active Directory the "last logged on from [computer]" is written/stored, or that there is a log I can parse. The purpose of wanting to know the last PC logged on from is for offering remote support over the network - our users move around pretty infrequently, but I'd like to know that whatever record I'm consulting was updated that morning (when they logged in, presumably) at minimum. I'm also considering login scripts that write the user and computer names to a known location I can reference, but some of our users don't like to log out for 15 days at a time. If there is an elegant solution that uses login scripts, definitely mention it - but if it happens to work for merely unlocking the station, that would be even better!
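
    A hedged login-script sketch that covers the basic case (the share path is a placeholder): append a line at each logon, and also write a per-user "latest" file that is easy to look up.

      rem logon.bat
      echo %date% %time%,%username%,%computername% >> \\server\support$\logons.csv
      echo %computername% > \\server\support$\last\%username%.txt

    Plain logon scripts only fire at actual logons; on newer Windows versions, the same script can additionally be attached to a Scheduled Task triggered on workstation unlock to cover users who never log out.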

    Read the article

  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage with our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows Server combines into one HA disk store. QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris and am presented with both drives being served by the StarWind NAS. So, how do I configure Solaris so that disks M1 and M2 are considered a single fault-tolerant drive?
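
    A hedged sketch of the ZFS route, which simply treats the two iSCSI LUNs as a mirrored pair (portal addresses and device names are placeholders; the real device names come from "format" or "iscsiadm list target" after discovery):

      iscsiadm add discovery-address 192.168.10.10
      iscsiadm add discovery-address 192.168.10.11
      iscsiadm modify discovery --sendtargets enable
      devfsadm -i iscsi
      zpool create hapool mirror c2t0d0 c3t0d0

    For multiple IP paths to the same LUN (rather than two separate LUNs), Solaris MPxIO (enabled with stmsboot -e) is the piece that corresponds to Windows MPIO.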

    Read the article

  • Providing high availability and failover using MySQL on EC2

    - by crb
    I would like to have a highly available MySQL system, with automatic failover, running on Amazon EC2 instances. The standard approach to solving this problem is Heartbeat + DRBD, but I've found a lot of posts suggesting DRBD doesn't work on EC2, though none saying exactly why. Obviously, a serial heartbeat or a distinct network is out of the question in the virtualised environment. It would also be good to have the different servers be in different availability zones, but we're getting into a much harder problem there. What are people's opinions on having a high-uptime solution in "the cloud"? Note: This question was asked before RDS with multi-AZ was announced, which is the nice automatic answer for today's modern IT professional. :)

    Read the article

  • remote symbolic link / junction

    - by Blueberry
    This might be a pretty obvious one, but I have had some trouble finding solid answers. I have a directory on a Windows network share containing different versions of an application. I would like to have a link to one of these called 'current', which will be a symbolic link sitting beside all the other version directories and pointing to one of them. Creating this link seems to be more of an issue than I would have thought. It looks like a symlink only shows the link on the same machine where it was created (which is not going to work, for obvious reasons), and junction needs to be run on the server, which is practically impossible due to various restrictions. What would be the best way to go about this? Would I just need to copy the files twice, or can I have a symbolic link which can be created and accessed remotely?
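
    A hedged sketch of one approach that does not require running anything on the file server itself: create a relative directory symlink from a client that has write access to the share, so the link resolves on whatever machine follows it (the share path and version folder are placeholders):

      pushd \\fileserver\apps
      mklink /D current v1.2.3
      popd

    Clients may also need remote symlink evaluation enabled ("fsutil behavior set SymlinkEvaluation R2R:1 R2L:1"), and creating the link over SMB needs a Vista/2008-era client and suitable rights, so it is worth testing on one machine before relying on it.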

    Read the article

  • Routing tables don't show ppp0 after 12.04 kernel upgrade to 3.5.0: Haier CE682 modem configuration

    - by ubunsteve
    I'm trying to get my Haier CE682 EVDO modem, model number 201e:1022, to work in Ubuntu 12.04 with kernel 3.5.0-030500-generic #201207211835. I had it working in a previous 12.04 kernel, using compat-wireless and these instructions: http://zulkhamsyahmh.blogspot.com/2012/05/install-smartfren-haier-ce682-on-ubuntu.html. To get it working I had to edit the routing tables so that there was a ppp0 showing up, as suggested at http://www.linuxquestions.org/questions/slackware-14/wvdial-is-connecting-but-im-unable-to-do-anything-714861/. Network Manager doesn't work with this modem, so I use either wvdial or gpppon to connect to it, both of which work (after I run the command sudo modprobe usbserial vendor=0x201e product=0x1022). This is the output when I connect to the modem with gpppon:

      Using interface ppp0
      Connect: ppp0 <-- /dev/ttyUSB0
      sent [LCP ConfReq id=0x1 ]
      rcvd [LCP ConfAck id=0x1 ]
      rcvd [LCP ConfReq id=0x2 ]
      sent [LCP ConfAck id=0x2 ]
      sent [LCP EchoReq id=0x0 magic=0x819c86db]
      rcvd [CHAP Challenge id=0x1 <1ac8f12799e953967a3cc222c9254690, name = ""]
      sent [CHAP Response id=0x1 <6f12a903dc40915ca2761c17b87f8fbd, name = "smart"]
      rcvd [LCP EchoRep id=0x0 magic=0x0]
      rcvd [CHAP Success id=0x1 ""]
      CHAP authentication succeeded
      CHAP authentication succeeded
      sent [CCP ConfReq id=0x1 ]
      sent [IPCP ConfReq id=0x1 ]
      rcvd [IPCP ConfReq id=0x1 ]
      sent [IPCP ConfAck id=0x1 ]
      rcvd [CCP ConfReq id=0x1]
      sent [CCP ConfAck id=0x1]
      rcvd [CCP ConfRej id=0x1 ]
      sent [CCP ConfReq id=0x2]
      rcvd [IPCP ConfRej id=0x1 ]
      sent [IPCP ConfReq id=0x2 ]
      rcvd [CCP ConfAck id=0x2]
      rcvd [IPCP ConfNak id=0x2 ]
      sent [IPCP ConfReq id=0x3 ]
      rcvd [IPCP ConfAck id=0x3 ]
      not replacing existing default route via 192.168.3.1
      local IP address 10.191.248.154
      remote IP address 10.17.95.25
      primary DNS address 10.17.3.244
      secondary DNS address 10.17.3.245

    As you can see, there is a problem: "not replacing existing default route via 192.168.3.1". This is the output of route:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         192.168.3.1     0.0.0.0         UG    0      0        0 wlan0
      link-local      *               255.255.0.0     U     1000   0        0 wlan0
      192.168.3.0     *               255.255.255.0   U     2      0        0 wlan0

    I had tried these commands, which had previously worked in the earlier kernel:

      route del default
      route add default ppp0

    but that broke my wireless internet connection. I then added the default route back as shown above with:

      sudo route add default gw 192.168.3.1 wlan0

    So it seems I need to add or change the routing to show a ppp0 connection, but I don't know how to do that.
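
    A hedged sketch of two ways to get the default route onto ppp0 without losing the wlan0 LAN route; the 192.168.3.0/24 route stays in place either way, so the wireless LAN keeps working while internet traffic goes out the modem:

      # option 1: swap the default route by hand after the PPP session comes up
      sudo route del default gw 192.168.3.1 wlan0
      sudo route add default dev ppp0

      # option 2: let pppd do it for every session (if this pppd build supports the option;
      # the peers file name assumes the wvdial-installed /etc/ppp/peers/wvdial)
      echo "replacedefaultroute" | sudo tee -a /etc/ppp/peers/wvdial

    With option 2, pppd replaces the existing default route when the link comes up and restores it when the link drops, which is exactly what the "not replacing existing default route" message is declining to do by default.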

    Read the article
