Search Results

Search found 30252 results on 1211 pages for 'network programming'.

Page 968 of 1211

  • Cannot connect Windows 7 PCs to a Windows 2011 SBS domain

    - by Alexander Miles
    I can connect XP machines to our new domain just fine; however, I get the following error when I try to join any Windows 7 box on our network to the SBS 2011 domain: "An attempt to resolve the DNS name of a DC in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resolve DNS names in the target domain." I am wondering if part of the problem might be related to the fact that we still have our Windows 2000 DC active (and running DNS) until this server is set up for good. Any help on this would be much appreciated.
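
    A quick way to confirm whether the Windows 7 client can actually locate a domain controller is to query the domain's DC SRV records directly. A minimal sketch, run from the Windows 7 client, assuming a placeholder domain name of ddd.local (substitute the real one):

        rem ddd.local is a hypothetical domain name, not from the original post
        nslookup -type=SRV _ldap._tcp.dc._msdcs.ddd.local

        rem Check which DNS servers the client is actually configured to use
        ipconfig /all

    If the SRV query fails, pointing the client's DNS at a DC that hosts DNS for the target domain is the usual first step.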

    Read the article

  • Two routers, one off-site, same ISP-assigned static IP. A recipe for conflict?

    - by boost
    This is the situation I've inherited: there are two routers, one off-site. Both are connected to the ISP, and the ISP assigns both of them the same static IP (or so it seems). Presumably, the network problems we're having stem from the fact that two devices can't hold the same IP. So we rang up the folks off-site and told them to turn off their router, and now everything's working okay here. How do I get around this? Get another static IP? Figure out how to get the router to ask for a dynamic IP instead (as we're not using the static IP for anything)?

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty green when it comes to OpenGL programming. I'm currently trying to implement deferred shading using FBOs and their associated targets (textures, in my case). I have a simple (I think :P) geometry + fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            vec3 NewNormal = normalize(TBN * BumpMapNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated; however, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 output[3];

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
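
    For reference, the C-side setup that pairs with the layout(location = N) outputs above looks roughly like this. A minimal sketch under the assumption that the FBO and the three textures already exist; fbo, texWorldPos, texDiffuse and texNormal are hypothetical names, not from the original post:

        /* Attach the three textures to the FBO's color attachments. */
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, texWorldPos, 0);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                               GL_TEXTURE_2D, texDiffuse, 0);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT2,
                               GL_TEXTURE_2D, texNormal, 0);

        /* Route fragment output locations 0/1/2 to attachments 0/1/2. */
        GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                           GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);

    On the binding question: the textures only need to be attached to the currently bound draw framebuffer while rendering; they do not also have to be bound to texture units.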

    Read the article

  • Deactivate SYN flooding mechanism

    - by mlaug
    I am running a server with a service listening on port 59380. There are more than 1000 machines out there connecting to that service. Once I restart the service, all those machines reconnect at the same time. That caused some trouble, as I saw this entry in kern.log: TCP: Possible SYN flooding on port 59380. *Sending cookies*. Check SNMP counters. So I changed the sysctl net.ipv4.tcp_syncookies to 0, because the endpoints do not handle TCP SYN cookies correctly, and finally I restarted my network to put the change into production. The next time I restarted the service, the following message was logged: TCP: Possible SYN flooding on port 59380. *Dropping request*. Check SNMP counters. How can I prevent the system from doing this? All necessary countermeasures are already handled by iptables...
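
    With syncookies disabled, the kernel drops connection attempts once the listen backlog overflows, so one hedged option is to raise the backlog limits instead of disabling the protection. A sketch with illustrative values, not tuned recommendations (the service's own listen() backlog has to be raised to match):

        sysctl -w net.ipv4.tcp_max_syn_backlog=8192
        sysctl -w net.core.somaxconn=8192
        # add the same keys to /etc/sysctl.conf to persist across reboots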

    Read the article

  • VMware Player - Ubuntu can resolve hostname but ping fails

    - by recursive_acronym
    Using VMware Player on Windows 7 with an Ubuntu 10.04 guest. When I ping, the hostname resolves to an IP address but the ping itself fails. Hopefully this is a local issue, as I don't have access to any of the network equipment (routers, etc.). VMware Tools is installed. Is there any other information I can provide to help resolve this?

        eth0      Link encap:Ethernet  HWaddr 00:0c:29:83:4f:c0
                  inet addr:192.168.163.129  Bcast:192.168.163.255  Mask:255.255.255.0
                  inet6 addr: fe80::20c:29ff:fe83:4fc0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:179 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:50006 (50.0 KB)  TX bytes:16701 (16.7 KB)
                  Interrupt:19 Base address:0x2024

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)
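
    Since the name resolves but the ping fails, a hedged first step is to separate routing from DNS. A short diagnostic sketch for the guest:

        # Is there a default route, and can we reach the gateway?
        route -n
        ping -c 3 192.168.163.2   # VMware NAT usually puts the gateway at .2 (an assumption)

        # Does a public IP respond when DNS is bypassed entirely?
        ping -c 3 8.8.8.8

    If the gateway responds but the public IP does not, the problem likely lies in VMware's NAT setup on the host rather than in the guest.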

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up, and we are also looking into transactional replication. The data only needs to copy one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like schema changes in the source (fields added or modified) to carry over as easily as possible. Thanks in advance for all suggestions.

    Read the article

  • Getting two Exchange servers to communicate with each other

    - by Data-Base
    We have Exchange Server 2007 using our domain ddd.com. We created an isolated network with a firewall/gateway and installed a DC and Exchange Server 2010 using a demo/test domain (ddd.loc). We opened all the needed ports in the firewall (10.10.2.88) to the Exchange Server 2010. In our main Domain Controller (10.10.2.3) we defined the domain ddd.loc with IP 10.10.2.88 (the firewall), and we also defined MX records pointing to the same IP (10.10.2.88). We did that so that when we send email from my address [email protected], it will go to the Exchange Server 2010. All ping tests between the servers are OK, but we are not able to send or receive emails between these Exchange servers: we cannot send any email from the 2010 side to any address at all (the emails stay pending), and on the Exchange 2007 side we are getting error #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##
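
    A hedged way to verify what the main DC actually answers for the test domain's mail routing is to query it for the MX record directly:

        rem Ask the main DC (10.10.2.3) for the MX record of ddd.loc
        nslookup -type=MX ddd.loc 10.10.2.3

    If this does not return a record that ultimately resolves to 10.10.2.88, mail for ddd.loc has nowhere to be delivered, regardless of the firewall rules.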

    Read the article

  • VirtualPC/XPMode... trying to let a VM access pages served using IIS on the host machine

    - by John
    My host PC is running IIS 7.5 under Windows 7. I have a VM running XP to let me use IE6, but I've no idea what network settings on the VM/host are needed so the VM can access pages on the host. I thought if the host was 192.168.1.1, then from the VM I'd simply browse to http://192.168.1.1/ ... that works on the host itself, but the VM can't see it. I'm assuming there are some shortcuts here rather than manually having to set up loads of permissions, e.g. a shortcut way of letting the VM access the host, maybe?
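
    A common culprit in this setup is Windows Firewall on the host allowing port 80 only from localhost. A hedged sketch for opening it, run in an elevated prompt on the Windows 7 host (the rule name is arbitrary, and "localsubnet" assumes the VM's address falls within the host's subnet):

        netsh advfirewall firewall add rule name="IIS HTTP" dir=in action=allow protocol=TCP localport=80 remoteip=localsubnet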

    Read the article

  • FTP server over the internet using a non-default port

    - by ???? ????
    I want to make my FTP server available over the internet. I set it up on a Debian Linux computer and changed its port to 201. My local IP is 192.168.1.3, so I can access it from any computer on my network through ftp://mylocalip:201. It shows me the login page, I log in with my Linux user, and I can see the files on my FTP server. To make it public I set up port mapping on my router for port 201, but when I try ftp://mypublicip:201 it gives me the login page, and when I enter the login data it loads forever without ever showing my FTP server's files. When I ran it on the default port 21, it worked fine. Can anyone tell me what the problem is here?
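
    Logging in successfully but hanging on the file listing is the classic signature of the FTP data connection being blocked, while the control connection on port 201 works. If the server is vsftpd (an assumption - the post doesn't say which FTP daemon is in use), a hedged sketch is to pin the passive data ports and forward that range on the router as well:

        # /etc/vsftpd.conf - hypothetical values; forward 40000-40050 on the router too
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40050
        pasv_address=<your public IP>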

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers display the site/content fine and show all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers report that we have an incomplete chain. We have tried the following to resolve this: upgraded HAProxy to the latest 1.5.3; created a concatenated .pem file containing all the certificates (site, intermediate, with and without root); added an explicit ca-file attribute to the bind line in our haproxy.cfg. The .pem file verifies OK using openssl, and the various intermediate and root certificates are installed and showing in /etc/ssl/certs, but the checks still come back with an incomplete chain. Can anyone advise about anything else we can check or any other changes we can make to try to fix this? Many thanks in advance... UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainaname.com.pem
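
    A hedged way to see exactly which chain HAProxy serves during the handshake (as opposed to what sits in the .pem on disk) is to ask openssl directly, using the domain from the config line above:

        # Prints every certificate the server presents
        openssl s_client -connect domainaname.com:443 -servername domainaname.com -showcerts

    If only the leaf certificate shows up, the intermediates in the bundle are not being sent; note that HAProxy expects the .pem ordered leaf first, followed by the intermediates.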

    Read the article

  • SQL DB design to support user feeds (in an application like Facebook)

    - by Yoav
    I have a social network server with a MySQL DB. I want to show users feeds like Facebook does. Example - userX is now friends with userY, userX liked postX, etc. Currently I have this table: C1: UserId; C2: LogType (now friend, did like, etc.); C3: ObjectId (can be a userId or a postId, depending on the LogType). Currently, to get all the logs to show to a user, I do the following: 1. Get all the user's friends' userIds. 2. Query all rows where C1 is in userIds. 3. Scan the results - if LogType equals DidLike, check whether the post's OwnerId is the userId; if yes, add it to the logs. And so on.

    Obviously this is not efficient at all, so I am looking for a better way. What I had in mind: create a new table (in addition to the Log table) with C1: UserId (the user whose feed the entry belongs to); C2: LogId (from the Log table); C3: UserId of the one who did the action. When querying logs, look in this table and fetch the related logs (by LogId) from the Log table. Updating the table: whenever a user performs an action that should be in the log: 1. Add the log entry to the Log table. 2. Determine which users are interested in the log (who the user's friends are, who owns the post) and add the related entries to the new table (must be done in the background). 3. If a user unfriends another user, look in the new table for all rows where C3 equals the unfriended user's id and delete them. Any opinions? Other suggestions?
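
    For illustration, the proposed fan-out table might look like this in SQL. A hedged sketch with hypothetical names (feed_entry for the new table, log for the existing one), not a schema from the original post:

        -- One row per (recipient, log entry); written when the action happens
        CREATE TABLE feed_entry (
            recipient_id INT NOT NULL,  -- C1: the user whose feed shows this entry
            log_id       INT NOT NULL,  -- C2: reference into the existing log table
            actor_id     INT NOT NULL,  -- C3: the user who performed the action
            PRIMARY KEY (recipient_id, log_id),
            KEY idx_recipient_actor (recipient_id, actor_id)  -- speeds up unfriend cleanup
        );

        -- Fetching a feed then becomes a single indexed join:
        -- SELECT l.* FROM feed_entry f JOIN log l ON l.id = f.log_id
        --  WHERE f.recipient_id = ? ORDER BY l.id DESC LIMIT 50;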

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry. But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work. (A healthy dose of both is probably the best solution.) The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store. Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons.

    There are a few interesting take-aways from their process. First, they work in the store alongside employees and customers. There's no concept of documenting all the requirements and then building the product. Instead, they work closely with those who will be using the app in order to fully understand what's needed. When they find an issue, they change the software onsite and try again. This iterative prototyping ensures their product hits the mark. It feels like Extreme Programming, if you recall that movement.

    Second, they time-boxed the project to one week. Either it works or it doesn't, and either way they've only expended a week's worth of resources. Innovation always entails failure, and those that succeed are often good at detecting failure quickly and adjusting. Fail fast and fail often.

    Third, it's not always about technology. I was impressed that they used paper designs to walk through user stories and help understand the needs of the customer. Pen and paper is the innovator's most powerful tool.

    Our Retail Applied Research (RAR) team uses some of these concepts in our development process. (Calling it a process is probably overkill.) We try to give life to concepts quickly so the rest of the organization can help us decide whether we're heading in the right direction. It takes many failures before finding a successful product.

    Read the article

  • Ubuntu from console/command-line/shell

    - by Xolve
    Early Linux distros, though they required a lot of manual work, were quite good to use from the command line. If the X server didn't start, or you just wanted a shell to work in, they all supported that. The network was configured by init; sound was up and ready; newly inserted devices would be configured and their configuration placed in fstab. There were also small scripts I found on many distros that used windows under X but switched to ncurses on the console. But now all of this needs a GUI with a desktop manager (KDE, GNOME), and the new paradigms :'-( (NetworkManager, hal, etc.) require a GUI too. So if you are on the command line only, you have to be root - it looks like they believe only geeky admins need that - and edit config files or type long commands. Is there any way to make this easy from the shell again in Ubuntu?
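
    For the network part specifically, the classic console-only route still works on Ubuntu. A minimal sketch of a static setup in /etc/network/interfaces (the addresses are placeholders):

        # /etc/network/interfaces - placeholder addresses
        auto eth0
        iface eth0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1

    After editing, sudo ifup eth0 brings the interface up with no GUI involved.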

    Read the article

  • When RDP as a Domain User, Smart Card Requested

    - by Paul
    My Windows 8 machine is connected to domain zen. If I RDP to the W8 machine, I can log in as a local user without problems. If I try to log in as a domain user, I am prompted for a smart card instead of a password. Any ideas why? Note that the "Interactive logon: Require smart card" policy is disabled in group policy (screenshot omitted), and the output from rsop.msc confirms this (screenshot omitted). Some additional information on this one: if my connecting machine is on the same domain/network as the W8 machine, then I am prompted for a password as usual. If the machine is remote, on a different domain, then I am prompted for a smart card. In addition, the machine I am connecting from that gets the smart card prompt is an XP box. I haven't isolated exactly which of these factors triggers the different response.
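
    A hedged cross-check that does not depend on the GUI snap-ins is the command-line resultant-set-of-policy report, run on the W8 box and compared between a domain session and a local one:

        rem Summary of applied GPOs for the current user and computer
        gpresult /r

        rem Full HTML report, handy for diffing between machines
        gpresult /h rsop-report.html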

    Read the article

  • How do I split an internet connection into 4 equal connections?

    - by luis velasco
    My 4 roommates and I have a problem: one of my roomies is downloading torrents all the time, and whenever somebody needs to make a call or do something like YouTube or an online quiz, the internet is very slow. I can't create a network using a computer as a proxy; I just need a good router (and the budget is no more than $50). I just want to split a 16 Mb connection into four separate 4 Mb shares (theoretically).

    Read the article

  • Business Forces: SOA Adoption

    The only constant in today's business environment is change. Businesses that continuously foresee change and adapt quickly will gain market share and increased growth. In our ever-growing global business environment, change is driven by data: collecting, maintaining, verifying, and distributing it. Companies today are made and broken over data. Would anyone still use Google if it did not have one of the most accurate search indexes on the internet? No, because its value is in its data and the quality of that data. Due to the increasing focus on data, companies have been adopting new methodologies for gaining more control over their data while attempting to reduce the costs of maintaining it. In addition, companies are trying to reduce the time it takes to analyze data in regards to various market forces, to foresee changes before they actually occur.

    Benefits of Adopting SOA

    - Services can be maintained separately from other services and applications, so that a change in one service will only affect itself and its client services or applications.
    - Services allow system functionality to be distributed across a network or multiple networks.
    - The cost of maintaining business functionality is much higher in standard application development than with SOA, because a single service can be maintained and shared with other applications instead of business functionality being duplicated via hard-coding into several applications.
    - When multiple applications use a single service for a specific business function, all of the data being processed will be consistent in terms of quality and accuracy across those applications.

    Disadvantages of Adopting SOA

    - Increased initial costs and timelines are associated with SOA, because services need to be created and applications need to be modified to call them.
    - For an SOA project to be successful, it must obtain company and management support in order to gain the proper exposure, funding, and attention. If SOA is new to a company, the company must also support the proper training so that the project is designed and implemented correctly.

    References: Tews, R. (2007). Beyond IT: Exploring the Business Value of SOA. SOA Magazine Issue XI.

    Read the article

  • Is there a media player that allows me to group together radio streams which are just mirrors of the

    - by rakete
    I find it really annoying that for some radio stations, which have two or more servers to cope with the network load, there is not one single entry in Amarok's playlist but two or more entries. This makes it hard to pick the station I want from the list, because all the entries are always shown with the last played track as their name, and even if I only have a few radio stations in my list there will eventually be many different entries. Also, if I use the keyboard shortcuts to navigate the playlist, I always have to remember that radio station X has, for example, four entries in the playlist, so I have to press the track-switching shortcut four times to actually switch to the next station. Ideally I would like some solution for Amarok, but if someone knows of another media player that does this or something similar, I would appreciate that information as well.

    Read the article

  • Ubuntu Server 12.04, NAT, Router, DNS. It just doesn't work

    - by Bjørnar Kibsgaard
    I recently inherited some server hardware from work and decided that it could be my main router at home (among other things). The Ubuntu 12.04 server installation goes well hardware-wise, and everything is found and working when I boot up. So I begin by setting up eth1 with DHCP. This works fine; it gets a public IP address from my modem and we have a working internet connection. Then I set up my other NIC (eth0) as static (192.168.0.1), and this also works fine; I can access it from other computers in the network. The problems come when I try to set up a DHCP server with isc-dhcp-server. It seems to work, handing out IP addresses to the computers, but after one reboot it stops working. After the reboot, eth1 gets a public IP from the modem but has no internet access, and I have to manually run dhcpcd eth1 to get it working again. As far as I know I haven't made any changes to DNS. What am I doing wrong? I have never really had problems with this before. :)
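
    For reference, a minimal isc-dhcp-server setup for the LAN side usually comes down to two files. A sketch using the addresses above, though the lease range and DNS forwarder are placeholders:

        # /etc/default/isc-dhcp-server - serve DHCP only on the LAN interface
        INTERFACES="eth0"

        # /etc/dhcp/dhcpd.conf - placeholder range and DNS
        subnet 192.168.0.0 netmask 255.255.255.0 {
            range 192.168.0.100 192.168.0.200;
            option routers 192.168.0.1;
            option domain-name-servers 8.8.8.8;
        }

    Declaring eth1 as "iface eth1 inet dhcp" in /etc/network/interfaces should also let the WAN lease survive reboots without a manual dhcpcd run.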

    Read the article

  • DHCP clients fail after successful import of server to new machine (win2k3)

    - by Tathagata
    I transferred the configuration of a DHCP server from one server to another, both running Windows Server 2003 R2, following http://support.microsoft.com/kb/325473. The new server has a statically configured IP (outside the scope), like the old one. I stopped the service on the old server and started it up on the new server (authorized, too) - but when I run ipconfig /renew from a client, its network interface fails with all 0.0.0.0 (or 169.254.x.x) addresses. I read somewhere that I need to reconcile the scope to sync the new registry values (I'll try this tomorrow). What other troubleshooting steps can I take, other than these (which didn't help)? Things work fine when the old server resurrects and the new one is taken down. The new server showed there were no requests for offers.
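
    For reference, the migration in that KB article comes down to netsh's export and import. A hedged sketch of the commands, in case a clean re-export is worth trying:

        rem On the old server: export all scopes plus server settings
        netsh dhcp server export C:\dhcp.txt all

        rem On the new server: import, then check the event log for errors
        netsh dhcp server import C:\dhcp.txt all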

    Read the article

  • Using DNS entries to determine location

    - by Raphink
    I'm trying to think of a clean way to determine the location of machines (mainly, which datacenter they belong to) based on their network settings. I would like it to be dynamic, and I'm thinking of using special DNS records that would be specific to the DNS server in each datacenter. For example, you could have:

        root@machine1# dig TXT mysite
        ...
        mysite 3600 IN TXT "DC1"
        ...

        root@machine2# dig TXT mysite
        ...
        mysite 3600 IN TXT "DC2"
        ...

    I know that DNS has a special LOC record for location, but it takes coordinates, so it doesn't help in my case. Is there a standard way of addressing this issue, another special type of record for it, or some standard entries in TXT records?
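
    If each datacenter's resolver serves its own copy of the zone, the per-DC answer is a single line of zone data. A minimal sketch, reusing the name and values from the example above:

        ; zone file fragment on DC1's DNS server
        mysite   3600   IN   TXT   "DC1"

        ; and the same name on DC2's DNS server
        mysite   3600   IN   TXT   "DC2"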

    Read the article

  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

        eth0
         \- br0 - 1.1.1.2
             |- [VM 1's eth0]
             |    |- 1.1.1.3
             |    \- 1.1.1.4
             \- [VM 2's eth0]
                  \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
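
    One hedged approach is to split the rules by where the traffic terminates: packets addressed to br0's own IP traverse the INPUT chain, while bridged packets heading for the VMs traverse FORWARD, where the physdev match can key on the bridge port. A sketch with illustrative ports and policies (it relies on net.bridge.bridge-nf-call-iptables=1 so that bridged frames are seen by iptables at all):

        # Host-only rules: traffic to br0's own address
        iptables -A INPUT -d 1.1.1.2 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -d 1.1.1.2 -j DROP

        # Bridged traffic to the VMs passes through FORWARD
        iptables -A FORWARD -m physdev --physdev-is-bridged -d 1.1.1.3 -p tcp --dport 80 -j ACCEPT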

    Read the article

  • Suggestions for someone wanting to become a Laptop Reseller

    - by Josh B.
    First of all, to give you a little background: I have had my CompTIA A+ for several years now. I used to run a small business repairing Xbox 360 consoles, and I am currently a network engineer for a company in Ohio. I definitely know my way around a computer. I miss running a small business on the side, and I'd like to get something going again. There is really high demand in my area for laptops, and I was thinking about starting a small laptop store out of my house. What is the best way to do this? I was assuming that the best way would be to buy barebones systems and build them myself. If this is the best method, I would be very interested in any resources for getting parts and such. Apparently laptop parts aren't the easiest thing to come by (especially at a good price). Does anyone have any suggestions about how to get something like this going?

    Read the article

  • Win 2008: running an app from a shared folder

    - by Jirka Kopriva
    I have a shared folder with an app on a Windows 2008 server. After successfully mapping this shared folder from another PC in the local network, only text files and images can be opened; the app (.exe) cannot be run. (The app itself works fine and is running from a share on another server with Windows 2003; the Windows 2008 box is a fresh installation on a new machine.) Is there an extra setting to allow this? I am logged in as administrator and have granted all permissions to the account in the sharing properties (read, write, etc.).
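
    Share permissions and NTFS permissions are checked separately, and launching an .exe additionally requires the NTFS Read & Execute right. A hedged sketch for inspecting and granting it from an elevated prompt (D:\SharedApp and the account name are placeholders):

        rem List the effective NTFS ACL on the shared folder
        icacls D:\SharedApp

        rem Grant read-and-execute, inherited by files and subfolders
        icacls D:\SharedApp /grant "DOMAIN\someuser":(OI)(CI)RX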

    Read the article

  • Copy Ubuntu distro with all settings from one computer to a different one

    - by theFisher86
    I'd like to copy my exact setup from my computer at work to my computer at home, and I'm trying to figure out how to go about doing that. So far I've figured this much out. On the source computer:

    - Run dpkg --get-selections > installed-software and back up the installed-software file
    - Back up /etc/apt/sources.list
    - Back up /usr/share/applications/ to save all my custom quicklists
    - Back up /etc/fstab to save all my network mounts
    - Back up /usr/share/themes/ to save the customization I've done to my themes

    I'm also going to back up my entire HOME directory. Once I get to the destination computer, I'll first do a fresh install of 11.10, then copy over my HOME directory, /etc/apt/sources.list, /usr/share/applications, /etc/fstab and /usr/share/themes/. Then I'm going to run dpkg --set-selections < installed-software, followed by dselect, which should install all of my apps for me. I'm wondering if there's a way (or a need) to back up dconf and gconf settings from the source computer? I guess that's my ultimate question. I'd also like notes on anything else that might need to be backed up before I undertake this project. I hope this post is legit; I figured other people would be interested in knowing this process, and I don't see any other questions that really document it on here. I'd also like to take the project further and have each computer routinely back up all the necessary files so that both computers are basically identical at all times. That's stage 2, though...
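
    On the dconf/gconf question: both have command-line dump and restore tools, so a hedged sketch for folding them into the backup would be (dconf may require the dconf-tools package on 11.10):

        # On the source machine
        dconf dump / > dconf-settings.ini
        gconftool-2 --dump / > gconf-settings.xml

        # On the destination machine
        dconf load / < dconf-settings.ini
        gconftool-2 --load gconf-settings.xml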

    Read the article

  • DNS Help: Move domain, not mailserver

    - by Preserved
    I'm in the middle of launching a new website for an already-in-use domain. The domain has a complicated email system, so we'd like to move that over to the new server a bit later on. Currently the domain's DNS is managed by the current webhost. I plan on moving the DNS management back to Network Solutions, then pointing the A record to the new website's IP. However, the DNS currently has the MX record the same as the A record, so once Network Solutions is managing the DNS and I point the A record to the new IP, the MX record can't simply follow the A record. Right now:

    - A record: mydomain.com points to IP address 198.198.198.198
    - MX record: mydomain.com points to IP address 198.198.198.198

    What I want:

    - A record: mydomain.com points to the IP address of the new server
    - MX record: somehow points to the current, existing mailserver

    Does this even make sense?
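
    For what it's worth, DNS supports this split directly: an MX record points at a hostname rather than an IP, so the mail host keeps its own A record while the bare domain's A record moves. A hedged zone sketch, where mail.mydomain.com is a hypothetical name for the existing mail server:

        mydomain.com.        IN  A      <new web server IP>
        mail.mydomain.com.   IN  A      198.198.198.198
        mydomain.com.        IN  MX 10  mail.mydomain.com.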

    Read the article
