Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 418/750 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • SSH with X11 forwarding to host where I don't have a home-dir

    - by Albert
    I am trying to ssh with X11 forwarding into a host where I don't have a home directory. Because of that, xauth fails and X11 doesn't seem to work. I tried to specify a home directory in advance, but I guess env-vars are not exported to the host. zeyer@demeter:~> HOME=/tmp ssh ares -XY Password: Warning: No xauth data; using fake authentication data for X11 forwarding. Last login: Mon Mar 28 11:52:57 2011 from demeter.matha.rwth-aachen.de Have a lot of fun... Could not chdir to home directory /home/zeyer: No such file or directory /usr/bin/xauth: error in locking authority file /home/zeyer/.Xauthority zeyer@ares:/> Is there any trick to make the X11 forwarding work? I still have write access to /tmp, but I am not sure how to set up the xauth fake authentication data manually.

    Read the article

  • Apache + SuExec + php-fpm - how to set them up?

    - by FractalizeR
    Hello. I wonder if there is a good guide on how to set up Apache + SuExec + php-fpm? I have a server on which I am going to host several separate websites, so I need PHP to run as each site's owner user. From what I can see, php-fpm is a little different from php-fcgi. Is mod_fcgid still needed on the Apache side in this case? How do I set this all up? For now my site is running Apache + mod_suphp + php-cgi, so... it's good, but a little slow. I want to preserve security and gain the ability to use APC.
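
    For what it's worth, php-fpm removes the need for suEXEC/mod_suphp for this goal: each site gets its own FPM pool running as the site owner, Apache just forwards PHP requests to that pool over FastCGI, and APC keeps working inside the long-lived PHP processes. A rough sketch with placeholder pool names, paths and ports (mod_proxy_fcgi syntax shown, which is an Apache 2.4 module; older Apache needs an external FastCGI module instead):

      ; /etc/php-fpm.d/site1.conf -- one pool per site, placeholder names
      [site1]
      user = site1owner
      group = site1owner
      listen = 127.0.0.1:9001
      pm = dynamic
      pm.max_children = 5
      pm.start_servers = 2
      pm.min_spare_servers = 1
      pm.max_spare_servers = 3

      # in the Apache vhost for site1 (mod_proxy_fcgi); mod_fcgid is not needed
      ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/var/www/site1/$1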

    Read the article

  • If fiber runs 1gig fine, are there any concerns when considering upgrading to 10gig transceivers?

    - by Eric
    We had fiber installed (connecting ~10 buildings) around 5 years ago and it has been working great. The initial setup involved ProCurve 2848 and 2824 switches with 1gig transceivers. Lately, however, we have been considering upgrading our network, both to increase bandwidth and possibly to add VoIP. A lot of this assumes that we can just pop the existing fiber into 10gig XFP transceivers in better switches and call it a day. If the fiber works fine at 1gig, does that mean it should be fine for 10gig? If not, how can we confirm that our existing fiber trunks will work, preferably in an affordable fashion?

    Read the article

  • Wildcard DNS setting in Windows Server 2008 R2 DNS Server not working

    - by mattmcmanus
    We've got a Windows Server 2008 R2 DNS server in which we are trying to set up a wildcard DNS entry. We want proxy.domain.com and *.proxy.domain.com to go to the same IP. Right now, it seems as if the Windows server has registered the literal asterisk as the subdomain: *.proxy.domain.com resolves to the right IP, but something like login.proxy.domain.com doesn't. This seems to be a problem specific to 2008, because we were able to get this working on a 2003 server. Has anyone come across this yet?
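
    If the console keeps treating the asterisk as a literal label, one workaround worth trying is adding the record from the command line with dnscmd, which takes the node name exactly as typed (the zone name and IP below are placeholders):

      dnscmd . /RecordAdd domain.com *.proxy A 203.0.113.10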

    Read the article

  • ntpstat response fine but server time out of sync

    - by zedoo
    Hi, I found out that the ntpd service I set up a few weeks ago on a CentOS 5 machine doesn't correctly synchronize the server time. I detected an offset of more than 5 minutes (by stopping ntpd and executing ntpdate). After setting up the service I checked it via ntpstat: [xxxx@xxx ~]$ ntpstat -q synchronised to local net at stratum 11 time correct to within 10 ms polling server every 1024 s I repeated this check every day and it always showed this output. Doesn't this output tell me that the server time is sane?
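
    One thing the output above does reveal: "synchronised to local net at stratum 11" usually means ntpd has fallen back to its own local clock driver (the 127.127.1.0 pseudo-peer) rather than to a real network server, so ntpstat can look healthy while the clock drifts. Checking the peer list makes this visible (a sketch):

      ntpq -pn
      # if the peer marked with '*' is 127.127.1.0 / LOCAL(0), the server is only "syncing" to itself;
      # check that the servers configured in /etc/ntp.conf are reachable and not blocked by a firewall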

    Read the article

  • Connect work laptop (domain) to home workgroup

    - by jjeaton
    Is there an easy way to have my work laptop connect to a home workgroup for file sharing with my other PCs, but then easily switch back to connecting to my work domain when I'm at work? I have the following setup: a Windows 7 Home Premium server/HTPC, 2 Windows XP laptops, and 1 Vista laptop (work). The work laptop connects to a work domain; the remaining computers are on a home workgroup for sharing files and a printer. Also, is it possible to share files over my LAN while I'm connected to the work domain, but at home? I've tried Live Mesh, but my 2 home laptops are very slow and don't work well with it. I also use Dropbox, but I'd like to be able to share larger files. I may be missing a simple solution here...
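
    On the narrower question of reaching workgroup shares while the laptop stays joined to the work domain: domain membership doesn't by itself block connections to a workgroup machine, it usually just means the credentials don't match. Mapping the share with the home machine's local credentials often works (a sketch; the machine, share and account names are placeholders, and the * prompts for the password):

      net use Z: \\HTPC\Shared /user:HTPC\homeuser *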

    Read the article

  • Multi-monitor aterm transparency

    - by Bryan Ward
    I have 3 monitors, and I set the background across them using xpmroot my-5760x1200bg.png I then set up aterm to use transparency by adding the following to my ~/.Xdefaults file. aterm*transparent:true aterm*shading:60 aterm*background:Black aterm*foreground:White aterm*scrollBar:true aterm*scrollBar_right:true aterm*transpscrollbar:true aterm*saveLines:32767 aterm*font:*-*-fixed-medium-r-normal--*-140-*-*-*-*-iso8859-1 aterm*boldFont:*-*-fixed-bold-r-normal--*-*-140-*-*-*-*-iso8859-1 I am getting transparency on my aterm windows, but the image showing through the transparency isn't correct. On the left monitor things are fine, but the middle and right monitors both seem to use the leftmost 1920x1200 of the background image as what is behind the terminal window. It is as if every screen had the same background as the monitor on the left. Is this something that can be configured to be correct, or is this a bug? I'm running Gentoo Linux with Xmonad.

    Read the article

  • Zyxel p-2602HW-1DA - LAN to WAN routing problems

    - by Garrett
    Hi. I got a new router yesterday (due to a new internet supplier) and now all requests for my own server (local LAN) are routed directly to the router instead of the server when using DNS. For example, I have a website www.mysite.org running on my server at home (local LAN). From work I can access it via www.mysite.org, which is great. But from home (local LAN) my requests for www.mysite.org get rerouted to the router's web admin interface. My last router didn't do this. My new router is a Zyxel P-2602HW-1DA; my old one was a LinkSys WRT-54GC V. 2.0. There's a rather weird WAN-LAN, WAN-WAN setup interface which I can't really comprehend yet, and the docs are rather vague. Has anyone had the same problem, and can anyone guide me to a solution? It would be nice not to have to write the IP address every time I need to access the server on the local LAN. :) Kind regards Garrett
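
    Until hairpin NAT (NAT loopback) can be made to work on the Zyxel itself, a common workaround is split DNS on the LAN side: have the name resolve to the server's private address for machines at home, for example via the hosts file on each LAN client (the private IP is a placeholder):

      # /etc/hosts on a LAN client (or C:\Windows\System32\drivers\etc\hosts on Windows)
      192.168.1.10  www.mysite.org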

    Read the article

  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today. Pull Request Changes In this release, we have begun to re-focus on Pull Requests to ensure a productive experience between the project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations. One of the biggest hurdles people have indicated is the inability to see what a pull request includes without pulling the source down from a Mercurial client. With today’s changes, any user has the ability to view a pull request, the changesets / changes included, and perform an inline diff of the file. When a pull request is made, the CodePlex website will query for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison… All existing pull requests created prior to this release will not have changesets associated with them. If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear. Once a pull request is created, you can “View the Pull Request”, which takes you to a dedicated pull request page [screenshot omitted]. As you may notice, we now display much more detailed information regarding that pull request, including who it was requested by and when, the associated changesets, the description, who it’s assigned to (we’ll come back to this) and the listing of summarized file changes. You’ll also notice that each modified file has the ability to view a diff of all changes made. When you click “(view diff)” for a file, an inline diff experience appears. This new experience allows you to quickly navigate through all of the modified files as well as view the various change blocks for each file. You’ll also notice that as you browse through each file’s changes, we update the URL to include the file path so you can quickly send a direct link to a pull request’s file. Clicking “(close diff)” will bring you back to the original pull request view. View this pull request live on WikiPlex. Pull Request Review Assignment Another new feature we added for pull requests is the ability for project members to assign pull requests for review. Any project member has the ability to assign (and re-assign if needed) a pull request to a project member. Once the assignment has been made, that project member will be notified via email of the assignment. Once they complete the review of the pull request, they can either accept or deny it, similarly to the previous process. Multi-Selection in Advanced View Filters One of the more recent requests we have heard from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the multiple options for each filter item and your work item query will be refined accordingly. Should you happen to unselect all options for a given filter, it will automatically reset to the default option for that filter. Furthermore, the “Direct Link” URL will be updated to include the multi-selected options for each filter. Note: The “Direct Link” feature was released in our previous deployment, just never written about. It allows you to capture the current state of your query and send it to other individuals.
    Advertisement Changes Very recently, the advertiser (The Lounge) we partnered with to provide advertising revenue for projects, or to donate it to charity, was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted over to the new infrastructure. Project owners should note the new contact information for getting paid. The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location URL for a new resource: public HttpResponseMessage Post(User user) { var response = Request.CreateResponse(HttpStatusCode.Created, user); var link = Url.Link("DefaultApi", new { id = user.Id, controller = "Users" }); response.Headers.Location = new Uri(link); return response; } That code uses a previously defined route, “DefaultApi”, which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don’t initialize the HttpConfiguration and Request instances associated with the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what’s needed: var controller = new UserController(); controller.Configuration = new HttpConfiguration(); var route = controller.Configuration.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } } ); controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/"); controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration); controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData); The HttpRouteData instance should be initialized with the route values you will use in the controller method (“id” and “controller” in this example). Once you have correctly set up all those properties, you shouldn’t have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried at least a dozen different tmx files from numerous examples to see if the actual file was corrupted. Every time I get this error: Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69) at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101) at game.Game.init(Game.java:17) at game.Tunneler.initStatesList(Tunneler.java:37) at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164) at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390) at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314) at game.Tunneler.main(Tunneler.java:29) Here is my Game class: package game; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import org.newdawn.slick.tiled.TiledMap; public class Game extends BasicGameState{ private int stateID = -1; private TiledMap map = null; public Game(int stateID){ this.stateID = stateID; } public void init(GameContainer container, StateBasedGame game) throws SlickException{ map = new TiledMap("data/maps/desert.tmx","maps");//ERROR } public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException{ //map.render(0,0); } public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException{ } public int getID(){return stateID;} } I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.

    Read the article

  • Password problem while creating domain

    - by Murdock
    Hi, I'm still a freshman at server management stuff, but this seems to go clearly against logic. After updating my Windows Server 2008 Standard 32-bit install and adding the DNS Server and AD DS roles, I wanted to create a domain using CMD and the dcpromo.exe setup. But no matter whether I disable the complex password requirement in the password policies or create a password which fully complies with the requirements for a strong and complex password, I can't get any further: it says that my password doesn't meet the requirements. I'm also asked there to enable the password requirement with NET USER -passwordreq:yes, and when I do so, this password doesn't work any more and I have to remove it from another admin account to at least be able to log in with a proper Administrator account.
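
    Two quick checks that sometimes help untangle this (both are standard Windows commands; the account name is the default): see which password policy the box is actually enforcing, and set the local Administrator password explicitly before running dcpromo again. This is a sketch, not a guaranteed fix:

      net accounts                (shows the effective minimum password length / age settings)
      net user Administrator *    (prompts for a new password for the local Administrator account)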

    Read the article

  • Cisco ASA - VPN and Hairpinning....

    - by Nordberg
    Hi, We have 2 sites that will be linked by an IPSEC VPN between 2 Cisco ASAs: Site 1 8Mb ADSL connection Cisco ASA 5505 Site 2 2Mb SDSL connection Cisco ASA 5505 Basically, both sites need access to a service at the end of another IPSEC VPN, Site 3, which I plan to terminate at Site 2. This is due to the way the service is sold - it's billed per gateway. So if both Site 1 and Site 2 had their own VPN connection to Site 3, it would cost us twice as much... Anyway, my idea is to have all traffic from Site 1 destined for Site 3 go via the VPN between Site 1 and Site 2, the end result being that all traffic that hits Site 3 has come via Site 2. I understand this is known as hairpinning, but I'm struggling to find a great deal of information on how it is set up. So, firstly, can anyone confirm that what I'm trying to achieve is possible and, secondly, can anyone point me in the direction of an example of such a configuration? Many thanks.
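
    For reference, the ASA feature that lets traffic enter and leave the same outside interface (the hairpin) is same-security-traffic, and the crypto ACL for the Site 2 - Site 3 tunnel also has to match Site 1's network so that traffic is carried across that tunnel. A sketch with placeholder subnets, not a verified config:

      ! on the Site 2 ASA -- subnets below are placeholders
      same-security-traffic permit intra-interface
      ! interesting traffic for the Site 2 <-> Site 3 tunnel must include Site 1's LAN as well
      access-list SITE3-VPN extended permit ip 192.168.1.0 255.255.255.0 192.168.3.0 255.255.255.0
      access-list SITE3-VPN extended permit ip 192.168.2.0 255.255.255.0 192.168.3.0 255.255.255.0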

    Read the article

  • Huawei HG8245T Router - not connecting to another router

    - by BubbaK
    I just got a fiber optic connection installed at home and, due to the construction of the house (it's mostly cement and bricks), the wireless signal doesn't reach throughout the house. Previously, to combat this issue, I just ran a wired connection from the modem downstairs up to a secondary (DLink) router upstairs, which did the job of getting me seamless wireless internet access everywhere. The issue is that with the new Huawei router, this setup isn't working. I have connected everything as before, but it seems that the other (DLink) routers are not picking up the connection. I have tried everything and am totally lost as to what to do to overcome this problem. Any help would be appreciated.

    Read the article

  • Azure VM with many IPs or SSL certificates

    - by timmah.faase
    I am looking to move our hosting environment to Azure and, to that end, have created a sandpit VM to figure things out. We host around 300-400 websites in IIS, and about 2% of these sites have unique, non-wildcard certificates, each requiring a unique public IP in our current setup. Can you get a range of IPs pointing to 1 VM/endpoint? Or is it possible to create an SSL proxy? I've never created an SSL proxy but like the idea of it. I'd need advice here on how to proceed if this is the best option. Sorry if this has been answered! Sorry also if my question isn't worded eloquently.

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate using a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients which are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata, or simply for data loads, Administration Services is utilized for all kinds of Essbase administration. All of these clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses CAPI, Essbase Studio uses JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are based either on ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As for all other components discussed previously, when setting up firewall rules be aware of the fact that all services may need to talk to the external authentication sources; this is not only needed for Shared Services.

    Read the article

  • Git Project Dependencies on GitHub

    - by VirtuosiMedia
    I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also be greatly appreciated. If you need more info on my setup, just ask in the comments.
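
    What's being described matches Git's submodule feature: the CMS repository records a pointer to a specific commit of the framework repository, which gets checked out into a sub-directory without its files being committed to the CMS history. A minimal sketch (the URL and directory name are placeholders):

      cd cms
      git submodule add https://github.com/yourname/framework.git framework
      git commit -m "Track the framework as a submodule"

      # after cloning the CMS somewhere else, pull the framework files in:
      git submodule update --init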

    Read the article

  • rsync error unexplained error (code 255) at io.c

    - by kabeer
    I was using a script to perform rsync from the sudo crontab. The script does a 2-way rsync (from serverA to serverB and in reverse). After I rebooted both server machines, the rsync stopped working from the sudo crontab. I also set up a new cron job and it fails. The error is: rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6] rsync: connection unexpectedly closed (0 bytes received so far) [receiver] However, when run from a terminal, the rsync script works as expected without issues. Please help.
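
    Error 255 from rsync is usually the underlying ssh transport failing, and a job that works interactively but not from cron often points to ssh credentials (an agent, a passphrase, or known_hosts entries) that exist only in the login session. One thing worth testing is spelling the key and remote user out explicitly in the cron job (a sketch with placeholder paths and hosts):

      rsync -az -e "ssh -i /root/.ssh/id_rsa -o BatchMode=yes" /data/ backupuser@serverB:/data/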

    Read the article

  • Advice about performance for local or remote SQL Server?

    - by TruMan1
    I currently have my web server and SQL Express / MySQL server on the same server; it is on a VPS. I have been having problems with my hosting, so I am thinking of separating the web and DB servers onto 2 VPS servers. Does anyone recommend this? I am worried that changing my setup from a local DB server to a remote one will degrade performance heavily. They will not be on the same network, but will reference each other via an IP address. Anything I should be aware of?

    Read the article

  • In Debian, how can I route rtorrent to a certain network interface, say ppp0?

    - by Timo
    I have purchased a PPTP account from StrongVPN and configured the setup following these instructions (http://pptpclient.sourceforge.net/howto-debian.phtml#configure_by_hand), and now I want rtorrent to do its communication with the Internet through this VPN tunnel. So I have a ppp0 interface, which carries the VPN tunnel. What is the next step? I guess it has something to do with the routing tables? I am new to routing, so please be elementary and precise so that I understand! Thank you!
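
    Two pieces usually have to come together: rtorrent must send its traffic from the ppp0 address, and the kernel must route traffic sourced from that address out of the tunnel. A sketch, assuming ppp0 received the address 10.8.0.2 (a placeholder) and using iproute2 policy routing; the table number 100 is arbitrary:

      # in ~/.rtorrent.rc -- make rtorrent bind to the tunnel's local address
      bind = 10.8.0.2

      # as root: route anything sourced from that address via ppp0
      ip route add default dev ppp0 table 100
      ip rule add from 10.8.0.2 table 100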

    Read the article

  • Need assistance setting up Linux Router with 2 public lans

    - by user195407
    I was assigned a.b.c.10/30 (public IP) for my router and given a.b.c.9 as the gateway. I was also assigned x.y.z.128/25 (a public IP block) for my use. I want to set up a Linux router to handle this situation. My Linux box has 3 NICs: eth0 is a.b.c.10, eth1 I have assigned x.y.z.254, and eth2 is unused at present. I have eth1 connected to a network switch, with several devices connected. Let's say box A is x.y.z.129 with a gateway of x.y.z.254. I have not connected to the network yet, as it is not live. What settings do I need to make, beyond adding the 2 network definitions to the cards and having "route add default gw a.b.c.9 eth0"? I may add a private 192.168.100.0/24 LAN to eth2 later.
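
    Since the /25 is public address space that the provider routes to a.b.c.10, no NAT is needed; beyond the interface addresses and the default route, the main missing piece is enabling IP forwarding, plus a firewall policy that permits the forwarded traffic. A sketch (the iptables rules are deliberately wide open and would need tightening):

      # enable routing between eth0 and eth1 (or set net.ipv4.ip_forward = 1 in /etc/sysctl.conf)
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # allow forwarding in both directions for now; restrict inbound traffic as needed
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT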

    Read the article

  • Dual boot new laptop win 7 / ubuntu 12.04 - 750gb + 32gb SSD

    - by Alex Waters
    I have just purchased a new HP dv7t-7000 and I would like to run Windows 7 / Ubuntu. How do I set up the dual boot? Can I install both operating systems with an 8 GB USB drive? Can I still make use of the 32 GB SSD? I'm unfamiliar with the efficacy of using an SSD for caching with a 750 GB 7200 rpm SATA 3 drive. I can only see using it for Windows 7 - which I have installed in order to play games. Thank you!

    Read the article

  • How do I get same scrollbar style for gtk-2.0 and gtk-3.0 apps?

    - by David López
    Sorry for my English mistakes, I'm Spanish. I'm using Ubuntu 11.10 on a tablet. I've removed overlay-scrollbars and I have increased the scrollbar size so I can use the scrollbars with my fingers. In /usr/share/themes/Ambiance/gtk-2.0/gtkrc I've changed: GtkScrollbar::slider-width = 23 GtkScrollbar::min-slider-length = 51 and added: GtkScrollbar::has-backward-stepper = 0 GtkScrollbar::has-forward-stepper = 0 In /usr/share/themes/Ambiance/gtk-3.0/gtk-widgets.css I've changed: GtkScrollbar-min-slider-length: 51; GtkRange-slider-width: 23; (in the .scrollbar item) Now my scrollbars are usable with fingers, but they look different in gtk-2.0 and gtk-3.0 apps. In the picture, the left scrollbar is from a gtk-2.0 app and the right one from a gtk-3.0 app. I want to set up the gtk-2.0 bar to be exactly the same as the gtk-3.0 one, that is: make the upper and lower extremes empty (orange circles in the picture), and reduce the length of the 3 horizontal lines (black ellipse). Can somebody help me? Thanks.

    Read the article

  • Using Exim and Google Apps email as smarthost

    - by pferrel
    I have a server set up to use exim4 with Google Apps as my smarthost, but I get errors when the To address is not the one I use to authenticate to Google, and it seems to drop all return addresses that are not the one it uses to authenticate. Example: on the contact form of my server, a user sets [email protected] as their return address and uses the form to send a message. I get an email sent to the admin's address [email protected], but the return address is also now [email protected], so I have no idea of the return address the user set on the form. I get around this by putting a bad email address in the form's default so Exim4 sends an error message to [email protected] with the user's email in the debug info. Clearly I either have it set up wrong or do not understand how smarthosts work (probably both).

    Read the article

  • Compiling the Linux kernel, how much size is needed?

    - by ant2009
    I have downloaded the newest stable Linux kernel, 2.6.33.2. I thought I would test this using VirtualBox, so I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages. I went through make menuconfig with just the default settings. After that I ran make and got the following error: net/bluetooth/hci_sysfs.o: final close failed: No space left on device make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1 make[1]: *** [net/bluetooth] Error 2 make: *** [net] Error 2 The amount of space I have left is: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 3.3G 3.3G 0 100% / /dev/hda1 99M 12M 82M 13% /boot tmpfs 125M 0 125M 0% /dev/shm My virtual size is 4 GB, but the actual size is 3.5 GB. $ ls -hl total 7.5G -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi How much space should I allow for when compiling and installing a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so I'm just experimenting with this.
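
    A default-config build of a kernel from that era can easily consume several gigabytes of object files, so a 3.3 GB root filesystem is tight. One way around growing the virtual disk is to attach a second, larger disk (or any roomier mount point) and point the build output there with the kernel's O= option (the path is a placeholder; a sketch):

      mkdir -p /mnt/bigdisk/kbuild
      make O=/mnt/bigdisk/kbuild menuconfig
      make O=/mnt/bigdisk/kbuild
      make O=/mnt/bigdisk/kbuild modules_install install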

    Read the article
