Search Results

Search found 30252 results on 1211 pages for 'network programming'.


  • Belgrade Open Source Software Development Center

    - by Tori Wieldt
    A new Open Source Software Development Center is open at the University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code." Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (a NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. (A video describing the NetBeans UML plugin accompanies the original post.) The University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real-world software development projects. Students, teachers, and researchers at the University of Belgrade, as well as any member of the open source community, are welcome to come learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • rdesktop + seamlessrdp + virtualbox slow on Windows

    - by Claudiu
    I'm trying to get VirtualBox seamless mode to work on all of my monitors at once. I was directed to this link, so I followed the instructions, using the Windows port of rdesktop 1.6 found here, and using Xming as an X server. I eventually got it to work! However, it's really slow. While VirtualBox's regular seamless mode is as performant as if I were running apps on my host machine, with rdesktop it takes 1-2 seconds to register any user input. Dragging windows around is laggy and buggy (pieces of the desktop behind the window show through). I'm simply connecting to localhost, so network bandwidth/latency shouldn't be an issue... anyone have any idea why it's so slow and what I can do to make it more performant?
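
    For reference, a speculative tuning sketch rather than a confirmed fix: rdesktop 1.6 has standard options that reduce drawing traffic and rendering cost, which may be worth trying one at a time even over localhost:

      rdesktop -z -P -x m -a 16 localhost
      # -z    enable RDP compression
      # -P    persistent bitmap caching
      # -x m  "modem" experience: disables wallpaper, themes and menu animations
      # -a 16 16-bit colour depth instead of 24-bit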

    Read the article

  • Wireless will not connect on an Asus U56? [closed]

    - by ernie
    I have an ASUS U56. I could connect to the internet without delay or problems in Ubuntu 11.04. Now, running Ubuntu 11.10, I have not been able to connect. I can see the network and the computer tries to connect, but never makes the connection. I have been trying to fix this problem since the release date of 11.10. I had to install the older 2.6 kernel to get the wifi to work. Any other solutions?

      ernest@ernest-U56E:~$ ifconfig
      eth0      Link encap:Ethernet  HWaddr 14:da:e9:25:6c:d0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                Interrupt:53

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      wlan0     Link encap:Ethernet  HWaddr 40:25:c2:3f:46:d4
                inet addr:192.168.xx.xxx  Bcast:192.168.10.255  Mask:255.255.255.0
                inet6 addr: fe80::4225:c2ff:fe3f:46d4/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:44962 errors:0 dropped:0 overruns:0 frame:0
                TX packets:32287 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:35614976 (35.6 MB)  TX bytes:5400816 (5.4 MB)
      ernest@ernest-U56E:~$

    Read the article

  • How can I use a computer as a router and send all client traffic through anonymous proxies?

    - by Terrapin
    Is there a way that I can set up a spare box as a router on my network, and route client traffic through a proxy in order to hide my location? Specifically, I would like internet traffic to/from my Roku box to be routed via proxy, but there is no proxy support built in to the Roku. So I would like to wire my Roku directly to my computer's second NIC, and force all traffic through a proxy. What kind of software and hardware setup will I need? Also, which anonymous proxy services are best for this purpose? I'm not interested in full anonymity or encryption. I simply want to mask my location while providing the best possible throughput.
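
    For context, the usual shape of this setup on a Linux box is IP forwarding plus NAT, with a transparent TCP-to-proxy redirector (redsocks is one such tool) catching the client traffic. A rough sketch, assuming the Roku hangs off eth1 and the redirector listens locally on port 12345 (both assumptions):

      echo 1 > /proc/sys/net/ipv4/ip_forward                 # let the box route
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT for the Roku side
      # divert the Roku's TCP traffic into the local redirector, which relays
      # it to the upstream (anonymous) proxy configured in redsocks.conf
      iptables -t nat -A PREROUTING -i eth1 -p tcp -j REDIRECT --to-ports 12345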

    Read the article

  • Disk quota problem in Windows Server SBS 2003

    - by deddebme
    I have got a new job and the existing SBS 2003 domain setup is insecure (i.e. everyone is a domain admin, etc.). There are lots of problems left by an inexperienced "network admin", and I am trying to fix them one by one. One issue I find quite weird is that the "Quota" tab exists for the C: (NTFS) drive but not for the D: (NTFS) drive. I played around with gpedit to enable disk quotas (the policy was "not configured" before), but I still can't see that tab. Have you seen this problem before? How did you solve it?
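
    As a hedged diagnostic sketch (fsutil ships with XP/2003, and this assumes nothing exotic about the volumes): fsutil can report per-volume quota state, which may show why Explorer offers the Quota tab on one NTFS drive and not the other:

      fsutil quota query C:
      fsutil quota query D: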

    Read the article

  • LAMP server permissions on development server

    - by user101289
    I run a LAMP server on an Ubuntu laptop I use only for development. I am not greatly concerned with security, since the server is never accessible outside the local network, and it's turned off when I'm not using it. My question is: what is the simplest and 'best' way to set permissions/users/groups so that when my own user creates, edits or writes files in the webroot, I won't need to go through and chmod/chown everything back to the www-data user? Should I add myself to the www-data group? Or chown the webroot to www-data:myself? Or is there a best practice for this situation so I don't have to keep re-setting the ownership of these files? Thanks
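
    For illustration, one common arrangement is to join the www-data group and make the webroot group-writable with the setgid bit, so new files inherit the group ("youruser" and /var/www are stand-ins):

      sudo adduser youruser www-data                     # takes effect at next login
      sudo chown -R www-data:www-data /var/www
      sudo chmod -R g+w /var/www                         # group members may write
      sudo find /var/www -type d -exec chmod g+s {} \;   # dirs pass their group on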

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();
          return partitionQuery.ToList();
      }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • How do I automatically add a new route to the routing table?

    - by Robbie Mckennie
    I'm looking at linking two networks with a long-range Ethernet bridge. I know I can connect my two networks with a router, but my problem is: how will my computers know where to send packets if I don't add the route manually? I COULD add them manually, but it seems like a hassle. I have very, very limited knowledge of RIP (I know it has something to do with routing), but I don't know how to use it. Edit: My vision for the network would be the 2 networks (which are currently independent home networks), connected by a microwave Ethernet link. I assumed I'd need a router on one end of the bridge to handle communication between the 2 networks.
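
    For context, the manual step being avoided looks like this on a Linux host (addresses are made up):

      ip route add 192.168.2.0/24 via 192.168.1.254   # reach the far LAN through the bridge router

    A RIP daemon (routed, or Quagga's ripd, for example) automates exactly this: the router advertises the far network and every host running the daemon installs the route itself. On a small two-network setup, though, a single static route on each side's default gateway is usually all that is needed, so the clients never need routes of their own.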

    Read the article

  • Can't access LAN computers with SSH

    - by endolith
    I got a new Windows 7 machine, and was using VNC, SSH, etc. to connect to my Ubuntu machine, and it worked fine previously. Now it doesn't work if I use the machine's hostname or local IP, but if I use the DynDNS name, it works. I can also access it from my Android phone using the local hostname over SSH. If I try to connect with SSH to the hostname, it says "Host does not exist". VNC says "Failed to get server address". NX says "no address associated with name", and I don't see it in Windows' "Network" folder. I've rebooted everything. I've turned off Windows Firewall. It was working fine a few days ago, but now it's not. How do I figure out what's blocking it?
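
    A hedged triage sketch from the Windows 7 box ("ubuntubox" and the address are stand-ins for the real names):

      rem does DNS know the name?
      nslookup ubuntubox
      rem does NetBIOS/LLMNR fallback resolve it?
      ping ubuntubox
      rem hypothetical LAN IP: does raw connectivity work?
      ping 192.168.1.50

    If the LAN IP responds but the name does not, the problem is name resolution rather than SSH/VNC; if neither responds while the DynDNS path works, comparing the two routes with tracert should show where they diverge.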

    Read the article

  • How do I get detailed information about what happens during logon

    - by Funky Si
    Due to my IT department leaving, I am now responsible for all our IT systems. I now have several problems to get my head around and fix. I run Active Directory on Windows Server 2003 and use group policy to apply settings etc. Recently we have had some Windows 7 clients added to our network; these are having awful problems with our logon scripts and drive mappings. For the most part my XP clients are working without a problem. What I want to know is what is going on during logon, as running the logon scripts after I have logged on often works. Does anyone know of a way to get detailed log information of what is happening before and during logon? Thanks for your help and any suggestions you have for tracking down the source of these problems.
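
    Two places give exactly this kind of detail; a sketch, since the right one depends on the client OS. On the Windows 7 clients, Group Policy processing is recorded in an operational event log, and gpresult summarizes what was applied:

      gpresult /h C:\gp-report.html
      wevtutil qe Microsoft-Windows-GroupPolicy/Operational /c:30 /f:text /rd:true

    On the XP clients, the documented UserEnvDebugLevel registry switch turns on verbose logging of profile and policy processing to %windir%\debug\usermode\userenv.log:

      reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x00030002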

    Read the article

  • Best way to find the computer a user last logged on from?

    - by Garrett
    I am hoping that somewhere in Active Directory the "last logged on from [computer]" is written/stored, or there is a log I can parse out? The purpose of wanting to know the last PC logged on from is for offering remote support over the network - our users move around pretty infrequently, but I'd like to know that whatever I'm consulting was updating that morning (when they logged in, presumably) at minimum. I'm also considering login scripts that write the user and computer names to a known location I can reference, but some of our users don't like to logout for 15 days at a time. If there is an elegant solution that uses login scripts, definitely mention it - but if it happens to work for merely unlocking the station, that would be even better!
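
    The login-script idea from the question can be as small as one line; a sketch, with \\server\logons standing in for any writable share:

      rem logon.bat - record who logged on where; newest entry wins
      echo %DATE% %TIME% %COMPUTERNAME% > \\server\logons\%USERNAME%.txt

    For the unlock case, the same line can be attached to a scheduled task using the "On workstation unlock" trigger that Task Scheduler offers on Vista/7, which covers users who stay logged in for weeks.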

    Read the article

  • Set up redirect for LAN clients to local T&C page

    - by tb2571989
    Hi, I'm trying to set up something on my network so that when users connect and try to use the internet they are redirected to a locally-hosted terms and conditions and policy page. Once they click "accept" they will be passed through to their homepage; otherwise, if they decline, the window will close or show them an error message. I've spent a while looking into this and am wondering if it's possible to do without having to set up or add to a firewall. Otherwise let me know what my options are and I can pass it on. Many Thanks Tom
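
    For reference, without a dedicated appliance the usual shape of a DIY captive portal is a Linux gateway that diverts port 80 until a client accepts. A rough sketch (eth1 as the client-facing NIC and 10.0.0.1:8080 as the box serving the T&C page; both are made up):

      iptables -t nat -N PORTAL
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j PORTAL
      iptables -t nat -A PORTAL -j DNAT --to-destination 10.0.0.1:8080
      # the "accept" handler then exempts that client, e.g. by MAC address:
      iptables -t nat -I PORTAL -m mac --mac-source 00:11:22:33:44:55 -j RETURN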

    Read the article

  • my sweet old VHS collection

    - by microspino
    What is the best procedure and digital format to resurrect my old VHS library in a way I can watch it on my LCD TV? I have a not-so-big collection of about 100 VHS tapes. I have plenty of storage. I have a network media tank (a Popcorn Hour A-110, but I can also purchase a new media center if needed). I have an old working VCR (but again, I can pick up a specific one new if you think it's better for preserving quality). The VHS cassette collection seems to have retained good quality over the years. Of course I have computers (both Mac and PC) to do the processing. Which software do I need/miss? Please give me some advice.
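
    Once a tape has been captured to a file (say capture.avi, via a USB capture stick or DV bridge), a typical transcode for an H.264-capable media tank looks like this - a sketch, not a recipe:

      # yadif deinterlaces the VHS signal; -crf 18 is near-transparent quality
      ffmpeg -i capture.avi -vf yadif -c:v libx264 -preset slow -crf 18 -c:a aac -b:a 192k movie.mp4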

    Read the article

  • Why change net.inet.tcp.tcbhashsize in FreeBSD?

    - by sh-beta
    In virtually every FreeBSD network tuning document I can find:

      # /boot/loader.conf
      net.inet.tcp.tcbhashsize=4096

    This is usually paired with some unhelpful statement like "TCP control-block hash table tuning" or "Set this to a reasonable value." man 4 tcp isn't much help either:

      tcbhashsize    Size of the TCP control-block hash table (read-only).
                     This may be tuned using the kernel option TCBHASHSIZE or
                     by setting net.inet.tcp.tcbhashsize in the loader(8).

    The only document I can find that touches on this mysterious thing is the Protocol Control Block Lookup subsection beneath Transport Layer in Optimizing the FreeBSD IP and TCP Stack, but its description is more about potential bottlenecks in using it. It seems tied to matching new TCP segments to their listening sockets, but I'm not sure how. What exactly is the TCP Control Block used for? Why would you want to set its hash size to 4096 or any other particular number?

    Read the article

  • BIND DNS server in Solaris 10 and Win XP clients

    - by stevecomptech
    Hi, I added this to the zone db file (I am running Solaris 10):

      _ldap._tcp.mydomain.com.               SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.mydomain.com.           SRV 0 0 88  dc.mydomain.com.
      _ldap._tcp.dc._msdcs.mydomain.com.     SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.dc._msdcs.mydomain.com. SRV 0 0 88  host.mydomain.com.

    Now I get this error when I try to join Win XP to the domain:

      The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
      The following domain controllers were identified by the query:
      host.mydomain.com
      Common causes of this error include:
      - Host (A) records that map the name of the domain controller to its
        IP addresses are missing or contain incorrect addresses.
      - Domain controllers registered in DNS are not connected to the
        network or are not running.

    What do I need to change so my Win XP machine can join the domain?
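
    A hedged illustration, not a confirmed fix: the error text itself points at missing Host (A) records, and note that the last SRV record above targets host.mydomain.com while the others target dc.mydomain.com. A zone would normally carry A records backing each SRV target, something like this (the address is made up):

      ; hypothetical A records for the SRV targets
      dc.mydomain.com.    IN A 192.168.0.10
      host.mydomain.com.  IN A 192.168.0.10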

    Read the article

  • VMware Workstation Development Server on Laptop

    - by Koobz
    I'm running an Ubuntu guest OS on my Windows 7 laptop. Currently, I have it set up for bridged networking. The guest OS is configured with a static IP of 192.168.1.115 which, depending on the network I'm connected to, may not be available. When I want to view my development work I hit that up in my web browser. I'm really looking for the following scenario:

    1) My guest OS IP address stays constant.
    2) I can access my guest OS even if I don't have an internet connection or a LAN/router.
    3) I can share files with my guest/host.

    How does one accomplish this using VMware Workstation?

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page on Windows and Mac, and the response is instant, while the Linux version waits about 4 seconds before it shows the response. I just hope this can be solved. More information added below. In Ubuntu:

      wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
      --2011-03-04 14:21:17--  xxx://192.168.0.135/test.cgi?cmd=
      Connecting to 192.168.0.135:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `test.txt'

      [ <=> ] 17  --.-K/s  in 0s

      2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    while on Mac OS:

      wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
      --2011-03-04 14:22:33--  xxx://192.168.0.135/test.cgi?cmd=
      Connecting to 192.168.0.135:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `test.txt'

      [ <=> ] 17  --.-K/s  in 0s

      2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    On Ubuntu the fetch takes 4-5 seconds (note the timestamps: started 14:21:17, saved 14:21:22), while Windows and Mac show no delay at all. I believe it may be related to some network setting such as packet size or window size, but I have no idea what to set. PS: the posting rules here do not allow URLs, so I have masked the scheme as xxx.
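
    A hedged place to start: stalls of about 4-5 seconds like this are classically name-service related (IPv6 lookup fallback or server-side reverse DNS) rather than packet-size or window tuning. Two probes that help isolate the phase (real scheme restored in place of the masked xxx):

      time wget -4 "http://192.168.0.135/test.cgi?cmd=" -O test.txt    # forces IPv4 only
      strace -r wget "http://192.168.0.135/test.cgi?cmd=" -O test.txt 2>&1 | tail -40
      # strace -r prefixes each syscall with the time since the previous one,
      # so a single ~4 s gap shows exactly which call is waiting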

    Read the article

  • PuTTY freezes at random when logging into a remote machine on another continent

    - by vito
    I have to ssh to a remote machine in Europe from Asia every day for my work, but PuTTY sometimes freezes at totally random times and I have no choice but to close it and open a new ssh session. It's frustrating, especially when I'm editing something or executing a long-running program. I know the question really doesn't have many details ('cause nothing seems to be wrong with the network at all). Has anyone experienced this sort of issue with PuTTY and resolved it? Thanks for your time!
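
    A hedged guess at a mitigation rather than a diagnosis: idle TCP sessions crossing NAT or stateful firewalls on a long intercontinental path are often dropped silently, and keepalives prevent that. In PuTTY this is Connection -> "Seconds between keepalives" (try 30); the OpenSSH equivalent, for comparison:

      # ~/.ssh/config ("eu-box" is a hypothetical alias for the Europe machine)
      Host eu-box
          ServerAliveInterval 30    # probe the server every 30 s
          ServerAliveCountMax 4     # declare the link dead after 4 missed replies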

    Read the article

  • using nmap to guess remote OS and probe service details on a single port only

    - by WoJ
    I am looking at scanning a large network with nmap in order to identify the OS of devices (-O --osscan-limit) and probe for details of a service on a single port (-sV would do it, but for all open ports). The problem is that -sV will probe all the ports (which I do not want to do for performance reasons), and I cannot use -p to limit the ports to the one I am interested in, as this impacts the OS fingerprinting. I could not find anything in the manual to limit the service probing. Thank you for any ideas (including other approaches outside of nmap, though I would prefer to stick to nmap).
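
    One workaround sketch (standard nmap options, though splitting the scan is an assumption about what is acceptable): run two passes so -p never touches the OS-detection run, then join the greppable outputs by host:

      nmap -O --osscan-limit -oG os-scan.gnmap 10.0.0.0/24   # pass 1: OS detection only
      nmap -sV -p 443 -oG svc-scan.gnmap 10.0.0.0/24         # pass 2: one port's service detail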

    Read the article

  • Allow READ access to local folders in 2003SBS AD

    - by Dan M.
    I have an SBS2003 client with a mess of a domain that is in the process of being cleaned up. But, for the life of me, I cannot find a setting that will allow write access to the local hard disk for domain users with profiles redirected to the server. This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive.... So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried. Every time I unchecked the RO property it would reset sometime right after I hit OK. Thanks in advance! Dan

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code.

      public void Draw(SpriteBatch batch, Camera camera, float alpha)
      {
          int tileMapWidth = Width;
          int tileMapHeight = Height;

          batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
              SamplerState.PointWrap, DepthStencilState.Default,
              RasterizerState.CullNone, null, camera.TransformMatrix);

          for (int x = 0; x < tileMapWidth; x++)
          {
              for (int y = 0; y < tileMapHeight; y++)
              {
                  int tileIndex = _map[y, x];
                  if (tileIndex != -1)
                  {
                      Texture2D texture = _tileTextures[tileIndex];
                      batch.Draw(
                          texture,
                          new Rectangle(
                              x * Engine.TileWidth,
                              y * Engine.TileHeight,
                              Engine.TileWidth,
                              Engine.TileHeight),
                          new Color(new Vector4(1f, 1f, 1f, alpha)));
                  }
              }
          }

          batch.End();
      }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts have so far been NFS and SAN - SAN being prohibitively expensive and NFS seeing some performance issues on the second node with regards to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds. The secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi and ping time between them is ~0.3ms.
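
    On the NFS side, the usual first lever for stat-heavy PHP (glob() is mostly a storm of stat calls) is attribute caching; a hedged sketch with made-up names:

      mount -t nfs -o noatime,actimeo=60 appmaster:/srv/app /srv/app
      # actimeo=60 caches file/dir attributes for 60 s, trading a minute of
      # staleness for far fewer round trips on every stat()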

    Read the article

  • Routing and Remote Access Port Mapping not applied to localhost

    - by Computer Guru
    Hi, I've set up Routing and Remote Access (Windows Server 2003) to forward publicip:80 to a server on the private internal network, and that's working great. Incoming requests from the internet to port 80 are correctly forwarded to our internal web server and everything is fine. However, requests on the server itself are not being forwarded. That is, if I open a console window and type "telnet publicip 80" from the server on publicip, the request is not forwarded to the private server. I understand that in RRAS I've mapped port 80 on the public interface to the private server and that's why it's not working; but I don't know how to configure it so that requests from the local PC are also forwarded to the private server. I'd appreciate any help or feedback on the matter. Thanks!
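
    What appears to be missing is hairpin (NAT loopback) translation, which RRAS's simple port mappings are generally reported not to do. A common workaround sketch - it only helps name-based access, not the telnet-to-public-IP test - is to point the public name at the private address for local machines (names/addresses hypothetical):

      rem append to the hosts file on the server itself
      echo 192.168.0.20  www.example.com >> %windir%\system32\drivers\etc\hosts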

    Read the article

  • Monitor Bonded Interface for Disconnection

    - by bradlis7
    I am trying to monitor for network failures on a machine, and one portion of that is to monitor that interfaces which are intended to be active are also "RUNNING". An Ethernet port, such as eth0, will say "RUNNING" if it is physically connected to another device. The problem lies in the bonded interfaces, such as bond0. If all of the Ethernet devices are disconnected, it still says that it is running, and it is still pingable. Is this by design, or is my system set up incorrectly? Does the miimon option have something to do with this?
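
    With miimon set, the bonding driver exposes per-slave link state in /proc, independent of what ifconfig reports for bond0 itself; a monitoring check can read it directly (a sketch):

      grep -A1 "Slave Interface" /proc/net/bonding/bond0   # each slave + its MII Status
      grep -c "MII Status: up" /proc/net/bonding/bond0     # quick count of "up" lines
      # note the first "MII Status" line describes the bond as a whole; a bond
      # with zero live slaves can still show bond0 as RUNNING to ifconfig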

    Read the article

  • OpenVZ multiple networks on CTs

    - by user6733
    I have a Hardware Node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. CTs are fine accessing eth0 (the public interface), but I can't get CTs to access eth1 (the private network). I tried, on the HN:

      vzctl set 101 --ipadd 192.168.1.101 --save
      vzctl enter 101
      ping 192.168.1.2      # no response here

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101). I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.
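
    For what it's worth, a hedged sketch of the venet plumbing this needs on the HN (addresses follow the question; the eth1 address is made up):

      echo 1 > /proc/sys/net/ipv4/ip_forward   # the HN must forward for its CTs
      ip route get 192.168.1.2                 # which NIC does the HN actually pick?
      # if the answer is eth0, the main table lacks a route for the private
      # subnet; giving eth1 an address in it normally adds one:
      ip addr add 192.168.1.3/24 dev eth1
      ip route show | grep 192.168.1.0         # should now list "dev eth1"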

    Read the article
