Search Results

Search found 20625 results on 825 pages for 'client'.


  • Would requiring .Net 4.0 act as a bar to adopting our software in a corporate environment?

    - by Sam
    We are developing a software product in .Net targeting large corporates. The product has both server and desktop client components. We anticipate that our product will be used by a small subset of workers in the corporation - probably those in the Finance function. We currently require .Net 3.5 but are considering moving to .Net 4. Could anyone with experience of managing IT in such an environment tell us whether requiring .Net 4.0 at this stage would be a bar to adopting our software? What attitudes prevail regarding the use of frameworks like .Net?

    Read the article

  • Start TLS and 389 Directory

    - by Kyle Flavin
    I'm trying to configure StartTLS on 389 Directory Server, but I'm having all sorts of issues. I've been following this doc: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/managing-certs.html which specifies that I should create a certificate for both the directory server and admin server. I've imported the CA cert on both servers. I've tried to use the same server certificate for both, but it will not allow me to do so. However, the admin and directory servers reside on the same host, so if I generate a new certificate it will need to use the same hostname. I'm not sure if that's valid... Has anyone out there set this up before? Any direction would be helpful. I have multimaster replication set up. From an external client, I'm attempting to do an ldapsearch -ZZ -x -h "myhost" -b "dc=example,dc=com" -D "cn=Directory Manager" -W "", and I'm getting a protocol error.
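
    A minimal client-side check, assuming the standard OpenLDAP tools (the hostname and CA path below are placeholders): a "protocol error" from -ZZ often means the server is not actually offering StartTLS on that port, which the root DSE will confirm.

        # Does the server advertise the StartTLS extension (OID 1.3.6.1.4.1.1466.20037)?
        ldapsearch -x -H ldap://myhost -b "" -s base supportedExtension

        # Retry with the CA certificate explicitly trusted and an explicit filter,
        # since a bare "" where the filter belongs can itself cause errors:
        LDAPTLS_CACERT=/etc/openldap/certs/ca.pem \
            ldapsearch -ZZ -x -H ldap://myhost -b "dc=example,dc=com" \
            -D "cn=Directory Manager" -W "(objectClass=*)"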

    Read the article

  • Oracle Virtual Desktop Infrastructure

    - by Fat Bloke
    A lot of the recent blog entries here have been about Oracle VM VirtualBox, possibly the coolest personal desktop virtualization product known to man. Deploying VirtualBox on your PC or Mac lets you run many virtual desktops at the same time for one user: you. But did you know that VirtualBox can also power an Enterprise-scale virtual desktop deployment, delivering many desktops to many users? As part of another Oracle product, Oracle Virtual Desktop Infrastructure (VDI), VirtualBox can run your Windows, Linux or Solaris desktops on servers located in the datacenter. Oracle VDI orchestrates the whole deal by looking after: creating or cloning the virtual desktops from a master template; managing the lifecycle of the desktops (create, start, suspend, resume, stop, delete); assigning which users get which desktops; delivering easy and fast access to these virtual desktops from almost any device, such as existing PCs or Macs, iPads, or specially designed Sun Ray client devices; and load balancing and session management of all of this. Architecturally the solution looks something like this: [architecture diagram omitted] This is an increasingly hot area of the IT landscape, so the Fat Bloke has decided to create a new blog category (VDI) and dedicate a few blog entries to looking into this in a bit more detail over the next few weeks. Watch this space... - FB

    Read the article

  • Win XP Pro, IIS 5.1, PCI Compliance

    - by Mudman266
    I have a client that was scanned and determined not to be PCI compliant. I looked, and they had IIS set up to allow a program from the central office to push/pull info from their server. Many of the reasons they failed appeared to have been fixed in service packs (they were on SP2) or security updates. I fully patched the server (Windows XP Pro) to SP3 with all optional updates. I had them scan again and again they failed, with only one fewer vulnerability, which I corrected manually (the server was showing debugging/error messages). The main issue I'm having is that when I research the CVE code for each error, they say they are fixed in SP2 and up. I'm wondering if I need to remove IIS and set it up again since I have patched to SP3. Any ideas?

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301 or 302 or 307 redirects should be used when serving up pages in response to requests for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is with this system possibly inadvertently making some pages unavailable to some users. That is, if the client moves a page at location/URL A to a new location B, users get the redirect from A to B, and then the client moves the page back to A again: how long can I expect any of those users to keep getting their requests for A redirected to B -- in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?
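
    For what it's worth, the caching in question happens in browsers and intermediate HTTP caches, not in routers. A quick way to see what a client is being told to remember (the URL is hypothetical):

        # -I fetches only the response headers; a 301 with no Cache-Control or
        # Expires header may be cached by a browser more or less indefinitely,
        # while a 302 or 307 is not cached by default, so a reverted move takes
        # effect on the client's next visit
        curl -sI http://example.com/moved-page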

    Read the article

  • Converting a Small Business Server to a Workstation

    - by noway
    I am planning to buy a Dell PowerEdge T110 server and convert it into a workstation similar to the Dell Precision T1700. The reasoning behind this is cost: if I do it myself, it costs about half as much. However, I wonder what might go wrong along the way. The things I have thought of are: client OSes are not officially supported, so there might be some driver problems; the chassis is designed to be a server case, so there are not many useful ports on the front; and the server boots more slowly than usual PCs. What else might be a problem?

    Read the article

  • Multiple Apps - One SSL

    - by Optix App Development
    I'm trying to configure a domain and an SSL certificate to run multiple Facebook apps through the same SSL endpoint. What I need advice on is routing the apps through the SSL host without actually hosting them on that server; ideally they would be hosted on the client's server. Any advice on how to do this? UPDATE: Following the advice from the replies, I have set up a domain which houses my Facebook apps under one SSL certificate. So far this is working well. Thanks guys. :)
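
    One common shape for this is a reverse proxy; a sketch below, assuming an Apache front end with mod_proxy and mod_ssl enabled. All hostnames, paths, and the client server URL are placeholders.

        cat > /etc/apache2/sites-available/apps-ssl.conf <<'EOF'
        <VirtualHost *:443>
            ServerName apps.example.com
            SSLEngine On
            SSLCertificateFile    /etc/ssl/certs/apps.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/apps.example.com.key

            # each path is proxied to an app hosted on the client's server,
            # so only this host needs the certificate
            SSLProxyEngine On
            ProxyPass        /app1/ https://client-host.example.net/app1/
            ProxyPassReverse /app1/ https://client-host.example.net/app1/
            ProxyPass        /app2/ https://client-host.example.net/app2/
            ProxyPassReverse /app2/ https://client-host.example.net/app2/
        </VirtualHost>
        EOF
        a2ensite apps-ssl && service apache2 reload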

    Read the article

  • How to bridge two networks via VPN (IPsec)?

    - by polemon
    I'd like to do site-to-site bridging with VPN (IPsec); how do I do that? On the local side I have a DrayTek Vigor2910, which is supposed to be able to manage IPsec tunnels. I need to have several VPN tunnels to various sites, but how exactly do I do that if the only router I can configure is the local one? As I understand it, I'd need some sort of VPN server or client, or whatever, on the other side. In any event, please clarify that issue. Thanks.
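
    For a non-Vigor far end, a minimal strongSwan sketch of one tunnel; addresses, subnets, and the pre-shared key are placeholders, and the Vigor side would have to mirror them exactly.

        cat >> /etc/ipsec.conf <<'EOF'
        conn site-a-to-site-b
            left=%defaultroute
            leftsubnet=192.168.1.0/24
            right=203.0.113.10
            rightsubnet=192.168.2.0/24
            authby=secret
            auto=start
        EOF
        # local-IP remote-IP : PSK "shared secret" (replace all three)
        echo '192.0.2.20 203.0.113.10 : PSK "replace-me"' >> /etc/ipsec.secrets
        ipsec restart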

    Read the article

  • Server 2003 Filter mobile devices via MAC

    - by msindle
    At one of my client's sites I need to keep users' unauthorized devices off the wireless. They all know the SSID and password because many of them have laptops that need the wireless. I'm running out of IP addresses, and we have sent out numerous emails asking them to stay off, but like most users they ignore IT's email. I'm currently running Server 2003 as the GC/DC (but have 2008 servers in place) and two Netgear WNAP320s. I've seen several posts similar to what I'm looking for, but they seem to deal with Linux. My question is: how do I go about doing this without migrating (scheduled for the end of the year) to a new server, and is it possible to do this within Server 2003? Thanks, msindle

    Read the article

  • Multiple redirects / rewrites within one VirtualHost group

    - by Benjamin Dell
    Hi, I have a client that now wants to point a couple of dozen URLs at their main site. I have added them as ServerAlias entries in the site's Apache config file, so now all of these URLs point to the primary one... excellent. The problem I have is that if ANY of these aliases is accessed at the root (i.e. www.domain.com rather than www.domain.com/some-page/) then I need to redirect it to a specific page within the site (i.e. anyone accessing domain.com might need to be sent to domain.com/special-landing-page/). However, any visit to anything other than the root should just continue as normal without any redirects. I've been battling with this for a few hours and can't seem to find the best solution. Does anyone have any suggestions?
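
    A sketch of one way to do this with mod_rewrite, assuming it is enabled; the domain names and landing path are placeholders standing in for the ones in the question.

        # append to the shared site's .htaccess (or put the directives
        # inside its <VirtualHost> block)
        cat >> /var/www/site/.htaccess <<'EOF'
        RewriteEngine On
        # only requests arriving under an alias Host header...
        RewriteCond %{HTTP_HOST} !^www\.primary-domain\.com$ [NC]
        # ...and only for the bare site root, go to the landing page;
        # 302 rather than 301 so the mapping can change later
        RewriteRule ^/?$ http://www.primary-domain.com/special-landing-page/ [R=302,L]
        EOF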

    Read the article

  • Multi-threading in an MMORPG server

    - by jean
    For an MMORPG, there is a tick function to update every object's state in a map. The function is triggered by a timer at a fixed interval, so each map's update can be dispatched to a different thread. On the other side, the server handles incoming player packets on its own threads as well: the I/O threads. Generally, the handler for an incoming packet runs on an I/O thread. So there is a problem: thread synchronization. I have considered two methods: (1) Synchronize with a mutex: an I/O thread locks the mutex before executing a handler function, and the map thread locks the same mutex before executing the map's update. (2) Execute all handler functions on the map's thread: an I/O thread only queues the incoming handler and lets the map thread pop the queue and call the handler function. Both have a disadvantage: delay. For method 1, if the map's tick function is running, all client requests have to wait for the lock to be released. For method 2, if the map's tick function is running, all client requests have to wait for the next tick to be handled. Of course, there is another method: add a lock to every function that uses data accessed by both the I/O threads and the map thread. But this is hard to maintain and easy to get wrong: it requires carefully checking whether each variable is accessed by both kinds of thread. My problem is: is there a better way to do this? Note that a map is a logical concept; no interaction can happen between two maps except transport. By I/O thread I mean a thread in the third-party network library that is used to handle client requests.

    Read the article

  • Is there a way to have TortoiseSVN use Windows 7 Kerberos tickets to auth against an Apache SVN server?

    - by jmp242
    I have PuTTY able to use GSSAPI on my Windows 7 x64 clients against Kerberos logins for SSH, i.e. it forwards the ticket you get when you log in to Windows. I can't figure out how to get TortoiseSVN to do the same. By changing from neon to serf in the config file, I can get it to prompt me for my credentials every time I do ANYTHING, and they work. But I need it to use the ticket so I don't have to continually type in my username and password. If TortoiseSVN can't do this, does anyone know of an SVN client for Windows that does?

    Read the article

  • How do I know if 'hg clone' is doing the work remotely?

    - by jjfine
    I've got a very simple Windows install of Mercurial on my machine. The 'central' repository is located at //mymachine/hg-repos/central. I want remote (VPN) users to be able to create clones of this repository in the hg-repos directory, because it gets daily backups. I have given these users full control of the hg-repos directory. My question is this: if I'm on a remote machine and I run the command hg clone //mymachine/hg-repos/central //mymachine/hg-repos/central-copy, is the remote machine doing most of the work? I don't want the client to have to download all of the central repository and then upload it all back, because people are going to be using this from across the country. But I suspect this is what's happening here, since it works so easily.
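
    For what it's worth: over a UNC path, Mercurial treats both sides as local directories, so the client does read everything and write it all back. A sketch of two ways to keep the work on the server (drive paths and the port are examples):

        # run the copy on the server itself (console or remote shell), so no
        # repository data crosses the VPN; -U skips creating a working copy
        hg clone -U D:\hg-repos\central D:\hg-repos\central-copy

        # for day-to-day remote clones, serve the repo over HTTP so the server
        # does the work and compresses what it sends:
        hg serve -R D:\hg-repos\central -p 8000
        hg clone http://mymachine:8000/ central    # run on the remote machine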

    Read the article

  • Returning a flexible datatype from a C++ function

    - by GavinH
    I'm developing for a legacy C++ application which uses ODBC for its data access. Coming from a C# background, I really miss the ADO style of data access. I'm writing a wrapper (because we can't actually use ADO) to make our data access less painful. This means no char arrays, no manual text blob streaming, and no declarative column binding. I'm struggling with how to store / return data values. In C#, at least, you can declare an object and cast it to whatever (as long as the type is convertible). My current C++ solution is to use boost::any to store the data value in a custom DataColumnValue object. This class has conversion and assignment operators for the various types used in our app (more than 10). There's a bit of complexity here, because if you store an int in the boost::any and try to boost::any_cast<long> you get a boost::bad_any_cast. Client objects shouldn't have to know how the value is stored internally. Does anyone have experience trying to store / return values whose types are only known at runtime? Is there a better / cleaner way?

    Read the article

  • Problem with the hosts file under Windows 7

    - by martani_net
    I updated some entries in the hosts file ("C:\Windows\System32\drivers\etc\hosts") to make google.com, for example, point to 127.0.0.1:

        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #      38.25.63.10      x.acme.com              # x client host

        127.0.0.1       localhost
        ::1             localhost
        127.0.0.1       google.com

    This works fine under Windows Vista, but not under Windows 7: when I browse to google.com, it goes directly to Google's website. For info, I am not using a proxy server. I think there are some temporary DNS settings that must be flushed, but I don't know how. Does anyone know how to fix this? Thank you.
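
    If the entry itself is correct, the usual first step on Windows 7 is flushing the resolver cache from an elevated command prompt; it is also worth re-saving the file as plain ANSI, since a Unicode/UTF-8 BOM has reportedly broken hosts parsing on some systems.

        :: flush the DNS resolver cache, then confirm what Windows now resolves
        ipconfig /flushdns
        ipconfig /displaydns | find "google.com"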

    Read the article

  • Is it possible to cause artificial network packet loss or latency?

    - by nbolton
    I'm trying to reproduce some issues on a deployed application where the MSSQL server and client are running on two separate machines. I think there may be network issues between the two machines, so I'd like to try to reproduce these conditions on two Hyper-V virtual machines (on the same virtual server). Of course, the network for these virtual machines is "local", so it's actually far from the conditions in a live environment. Is there a program I can run on either virtual machine which will degrade the network performance? Or maybe any other workarounds? For example, one way to reproduce the conditions may be to run the VMs on separate Hyper-V servers in geographically dispersed locations (so the SQL traffic goes over VPN or something) -- but this is a little long-winded, I think. There must be a simpler way.
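
    On Linux this is exactly what netem is for; if a Linux VM can be placed in the path (or the SQL traffic routed through one), something like the following degrades the link artificially. The interface name and the figures are examples.

        # add 100 ms +/- 20 ms of delay plus 1% packet loss on eth0
        tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%

        # put the link back to normal
        tc qdisc del dev eth0 root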

    Read the article

  • How does Hadoop decide what its nodes' hostnames are?

    - by Dan R
    Currently the URLs generated by the jobtracker and namenode return either hostnames like bubbles.local or just bubbles. These end up not resolving unless the client machine has them specified in its /etc/hosts file. When I run the hostname command on these machines, it returns a hostname complete with the domain (e.g. bubbles.example.com). Running a small Java test on these machines:

        InetAddress addr = InetAddress.getLocalHost();
        byte[] ipAddr = addr.getAddress();
        String hostname = addr.getHostName();
        System.out.println(hostname);

    produces output just like the hostname command. Where else could Hadoop be grabbing a hostname to use in its jobtracker / namenode UI? This is occurring in clusters with Hadoop 1.0.3 and 1.0.4-SNAPSHOT from early August. The machines are running CentOS release 5.8 (Final). The generated URLs I'm referring to are like this: http://example:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/ or http://example.local:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
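
    Since InetAddress.getLocalHost() already returns the FQDN here, the short and .local names in the UI are probably coming from the reverse DNS mapping of the node's address rather than the forward lookup. A sketch of what to compare on an affected node:

        hostname                                      # kernel hostname
        hostname -f                                   # FQDN via the resolver
        getent hosts "$(hostname -i | awk '{print $1}')"  # reverse mapping Java may pick up
        # an /etc/hosts line like "127.0.1.1 bubbles.local bubbles" is a
        # common source of names like these
        grep "$(hostname)" /etc/hosts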

    Read the article

  • Ubuntu Linux -- create custom burnable/bootable DVD image?

    - by ashgromnies
    I recently developed some kiosk software that runs on Ubuntu Linux, and my client needs me to set up ten more computers with the complete software package (and that number will only grow in the future). I'm looking for a way to make this less of a pain in the neck and prevent me from shooting myself in the foot: I had to disable some things in the operating system installations, such as screensavers and automatic updates, that would pop up and disrupt the kiosk operation, and I don't feel comfortable doing that by hand across 10 computers; it seems stupid. Does anybody have recommendations for software that would let me burn an installable DVD with a complete image of the hard drive from one of the devices? I've looked at Clonezilla, G4L, and PartImage, and I'm still not quite sure if any of them offer what I need. I know PartImage for sure won't work, because it doesn't support ext4.
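
    As one low-tech fallback, assuming the ten machines are identical hardware: a raw block copy of the prepared disk sidesteps the filesystem-support question entirely. Boot from live media so the disk is not mounted; device and mount paths below are examples, so double-check them before running dd.

        # from a live USB on the prepared machine: capture the whole disk
        dd if=/dev/sda of=/mnt/usb/kiosk.img bs=4M

        # from a live USB on each new machine: restore it
        dd if=/mnt/usb/kiosk.img of=/dev/sda bs=4M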

    Read the article

  • Hide unauthorized folders from view in WebDAV

    - by eric.s
    We have a Windows server running WebDAV. When a Mac connects to it, users see the folders they do not have access to as blacked-out folders, and the folders they do have access to as normal folders. This does not happen when a client using Linux or Windows connects. All users connect with their domain account credentials. I may not be searching properly to get the information I am looking for, so I turn here: is this an issue that I should be researching on the Mac side, or should I ask our network team to look into settings on the server?

    Read the article

  • Top causes of slow ssh logins

    - by Peter Lyons
    I'd love for one of you smart and helpful folks to post a list of common causes of delays during an ssh login. Specifically, there are two spots where I see a range from instantaneous to multi-second delays: (1) between issuing the ssh command and getting a login prompt, and (2) between entering the passphrase and having the shell load. Now, specifically, I'm looking at ssh details only here. Obviously network latency, speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I ssh to a vast multitude of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the ssh server configuration is unchanged from the OS's default settings. What ssh server configurations should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.?
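
    Two server-side settings come up again and again for exactly those two pauses; a sketch of how one might check and change them (config path and service name vary by distribution):

        # watch where the client stalls:
        ssh -vvv user@host

        # in /etc/ssh/sshd_config on the server:
        #   UseDNS no                   # skip the reverse-DNS lookup of the client
        #   GSSAPIAuthentication no     # skip Kerberos negotiation attempts
        # then reload (the service may be named ssh or sshd):
        service sshd reload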

    Read the article

  • Can't print graphic from Publisher to pre-printed document

    - by Michael Itzoe
    I have a client who was given a stack of 8.5x11 tri-fold (unfolded) brochures printed on standard printer paper, to which they need to add their bulk mail permit stamp. They've created a Publisher document with the stamp. If they print the document to standard paper in the tray, it prints fine. When they load the brochures into the tray, a brochure feeds through the printer but the stamp doesn't print. The stamp document is also 8.5x11. The printer is a Canon MF4300-D44. I can remote in, but obviously have no access to the hardware. Any ideas?

    Read the article

  • Access MacZFS via network from XBMC

    - by AreusAstarte
    I have a ZFS RAID (a zpool with three drives) hooked up to my Mac that I want to share on my LAN, so that the XBMC client on my OUYA console, which is hooked up to the television, can read the drive and stream my movies and television shows. I've searched around for a bit but so far haven't found anything that helped. I know that when connecting to the Mac with SSH I can't just access the drive, due to the different formatting. What do I have to do so that XBMC will be able to read it? How do I share it?
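
    One route XBMC handles natively is NFS; a sketch for the Mac side, where the pool's mount point and the UID are placeholders:

        # export the pool's mount point (mapall squashes access to one local user)
        echo "/Volumes/tank -alldirs -mapall=501" | sudo tee -a /etc/exports
        sudo nfsd enable          # start (and enable) the built-in NFS server
        sudo nfsd checkexports    # confirm the exports file parses
        # in XBMC: Videos > Add source > nfs://<mac-ip>/Volumes/tank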

    Read the article

  • software to view logs on remote server

    - by Tarun
    I have configured a syslog server on my Linux machine, like this: Linux machine (syslog client) -- Linux machine (syslog server). I have configured it and it's working properly. Now, the problem is that I want to look at the logs located on the remote server in a particular directory, for example /var/log/host-ip/httpd.log. One way is to use ssh and then view those logs using commands like tail, etc. So is there any browser-based graphical utility which I can use on my machine, in which I just give the remote folder location, IP address, etc., and the software shows me the logs present on that remote server? Thanks.

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs might exist for each choice: for example, storing the IDs of the entities in the result set on the client, storing them in the user's session on the web server, or storing them in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more for a design comparison between the possible approaches.

    Read the article

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have an image in my head of a link that is too slow to allow real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <- master setup where, when I write a file to Server A, the metadata transfers to Server B immediately and the file transfers when the link is idle, or immediately if Server B's client tries to read the file before Server A has sent it. It seems that there are many filesystems which perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.
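
    Not a filesystem, but as a crude stand-in for the "catch up every day" half of that picture, a nightly one-way sync is sometimes enough; hosts and paths are placeholders.

        # resumable, compressed transfer of whatever changed since last night;
        # e.g. run from cron at an idle hour
        rsync -az --partial /data/ serverB:/data/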

    Read the article
