Search Results

Search found 28222 results on 1129 pages for 'machine config'.


  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a web application using an in-memory collection that changes occasionally but is read very often. The collection is loaded from storage in the Application_Start event in global.asax and is updated whenever its content changes. If you want to deploy this application on Azure, keep in mind that more than one instance of the application can be running at any time, so you need some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is free of charge, a good solution is to maintain the information in Azure Storage Tables, read its contents in the Application_Start event, and propagate changes to all other instances using the internal HTTP port available on Azure web roles. Follow these steps to leverage the internal HTTP endpoint to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    4. Add a method on the new service to receive notifications from other role instances:

        namespace Service
        {
            [ServiceContract]
            public interface INotificationService
            {
                [OperationContract(IsOneWay = true)]
                void Notify(Information info);
            }
        }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override CreateServiceHost to host the internal endpoint:

        public class InternalServiceFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var internalEndpointAddress = string.Format(
                    "http://{0}/NotificationService.svc",
                    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);
                ServiceHost host = new ServiceHost(
                    typeof(NotificationService),
                    new Uri(internalEndpointAddress));
                BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                host.AddServiceEndpoint(
                    typeof(INotificationService),
                    binding,
                    internalEndpointAddress);
                return host;
            }
        }

    Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the .svc file and select "View markup") to set the new factory as the one used to create the service:

        <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to the other instances with this code:

        var current = RoleEnvironment.CurrentRoleInstance;
        var endPoints = current.Role.Instances
            .Where(instance => instance != current)
            .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);
        foreach (var ep in endPoints)
        {
            EndpointAddress address = new EndpointAddress(
                String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
            BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
            var factory = new ChannelFactory<INotificationService>(binding);
            INotificationService instance = factory.CreateChannel(address);
            instance.Notify(changedinfo);
        }

    Read the article

  • AWS RDS Timeout

    - by warder57
    I know next to nothing about networking/servers, so I'm assuming I'm missing something obvious; all of the resources I can find on this either don't work or are outdated. I created a brand new AWS account on the free plan and created a Postgres RDS DB instance, making sure the instance is set to publicly accessible. The RDS instance has the default VPC/security group settings. To connect to this DB from my local machine I used pgAdmin3 and followed the instructions on the AWS documentation page: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html I've double-checked all of the information required to connect: Host: whatever.whatever.us-west-2.rds.amazonaws.com Port: 5432 Username: USERNAME Password: PASSWORD When I try to connect to the database, my connection fails due to a timeout (during step 4 in the above guide). Can anyone point out what I am missing? Thanks in advance.
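
    To separate a network problem from a credentials problem, a minimal connectivity check can help (a sketch, assuming Python 3 on the local machine; the hostname below is a placeholder). A timeout at this level usually means the security group's inbound rules don't allow port 5432 from your IP, rather than anything wrong with Postgres itself:

        import socket

        # Placeholder endpoint -- substitute the real RDS hostname
        host = "whatever.whatever.us-west-2.rds.amazonaws.com"

        try:
            # Plain TCP connect: succeeds only if the security group allows
            # inbound traffic on 5432 from this machine's public IP
            with socket.create_connection((host, 5432), timeout=10):
                print("TCP connection succeeded; check credentials next")
        except OSError as exc:
            print("TCP connection failed:", exc)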

    Read the article

  • How can one send commands to the "inner" ssh session?

    - by iconoclast
    Picture a scenario where I'm logged into a server (which we'll call "Wallace") from my local machine, and from there I ssh into another server (which we'll call "Gromit"): laptop ---ssh---> Wallace ---ssh---> Gromit Then the ssh session from Wallace to Gromit hangs, and I want to kill it. If I enter ~. to kill ssh, it kills the ssh session from my laptop to Wallace, because the ~ is intercepted by that ssh session, and the . is taken as a command to kill the session. How do I send a command to the ssh session between Wallace and Gromit? How do I kill my "inner" ssh?
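
    For readers who land here with the same problem: OpenSSH consumes one escape character per session it passes through, so doubling the tilde sends a literal ~ on to the next hop. Assuming the default escape character on both hops, the inner session can be killed with:

        <Enter> ~ ~ .
        # the outer ssh turns "~~" into a literal "~" for Wallace;
        # the inner ssh then sees "~." and terminates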

    Read the article

  • Any program or editor in Windows 7 to view ".md" files

    - by Anmol Saraf
    I understand that '.md' is an extension for the Markdown format. While installing 'Grunt' from GitHub I see a lot of .md files inside the node_module/grunt/docs folder. As far as I understand, these files are supported by GitHub for documentation, if I am not wrong. My question is: are there any editors/tools or programs available for Windows 7 where I can see these .md files rendered? When I try to open any of these files in my text editor they display in raw format, with all the '#' etc. markup. I want to see the formatted version of these files so that I can navigate the documentation on my machine even without an internet connection. Thanks for helping!

    Read the article

  • On RouterOS, how will transparent proxying (with DNAT) affect reporting of netflows?

    - by Tim
    I have a box running Mikrotik RouterOS, which is set up to do transparent web proxying, as described here. In short, this means that I have a firewall rule for destination NAT causing any port 80 traffic to get redirected to port 8080 on the router, which is received by the Mikrotik local web proxy. The local web proxy then makes the web request on the client's behalf, in this case to a parent web proxy server (which in turn does the real web request). My question is, how will this two-part process get reported in the logging of traffic flow information (netflows)? Looking at the logged information, what I seem to be seeing is: one flow recorded from the client machine (private IP) to the remote proxy (8080), and another flow recorded from the router to the remote proxy (8080). The original request that the client made to port 80 isn't recorded. I want to write code to analyse traffic usage, so I want to be sure I'm not losing information if I discard the latter of these.
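
    For reference, the redirect described above is typically a dst-nat rule along these lines (a sketch of the common RouterOS transparent-proxy configuration, not necessarily the poster's exact rule):

        /ip firewall nat add chain=dstnat protocol=tcp dst-port=80 \
            action=redirect to-ports=8080 comment="transparent web proxy"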

    Read the article

  • Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel

    - by JuergenKress
    As part of our current project the build management team asked for a solution to undeploy multiple composites at one time. Of course you have the "Undeploy All from This Partition" menu option in Enterprise Manager, but since we have a lot of deployments every day the guys wanted a scripted solution. It is even more important for the nightly deployments on our continuous integration environment - strange, we couldn't find anybody who wants to do the undeployment via Enterprise Manager manually every night ;-) With WLST and ANT, the SOA Suite offers two options to undeploy composites via script, and in this article I'd like to explain both ways. Undeployment with WLST: you can test the steps below on Oracle's pre-built virtual machine for SOA Suite and BPM Suite 11g. Change to the WLST directory under MIDDLEWARE_HOME/Oracle_SOA1/common/bin (cd /oracle/fmwhome/Oracle_SOA1/common/bin/), open WLST (./wlst.sh) and connect to the SOA server. Read the full article by Danilo Schmiedel. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: undeploy soa,Danilo Schmiedel,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress
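
    To give a flavour of where those steps lead, here is a minimal WLST (Jython) sketch. The server URLs, composite names, revisions and credentials below are placeholders, and sca_undeployComposite is the SOA Suite WLST command the article goes on to use, so treat this as an outline rather than a drop-in script:

        # run inside wlst.sh
        connect('weblogic', 'welcome1', 't3://soaserver:7001')

        # undeploy several composite revisions in one pass
        composites = [('CompositeA', '1.0'), ('CompositeB', '2.1')]
        for name, revision in composites:
            sca_undeployComposite('http://soaserver:8001', name, revision)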

    Read the article

  • How to convert Windows filenames (from a checksums.md5) to *nix notation so I can use it on my shell with md5sum?

    - by Somebody still uses you MS-DOS
    I have some checksums.md5 verification files from an NTFS external drive, but they use Windows notation: \ instead of /, spaces in file names (not escaped), and reserved shell characters (like (, &, ', to name a few). The checksums.md5 has a bunch of checksums and filenames:

        ;Created by program
        ;2010
        f12f75c1f2d1a658dc32ca6ef9ef3ffc *My Windows & Files (2010)\[bak]\testing.wmv
        53445e1a0821b790872e60bd7a166887 *My Windows Files' 2 (2012)\[bak]\testing.wmv
        53445e1a0821b790872e60bd7a166887 *My Windows Files ˜nicóde (2012)\[bak]\testing.wmv
        ;Finished

    I want to use this checksums.md5 to verify the files that I've copied to my machine, but I'm on Linux, so I need to convert the names inside checksums.md5 from Windows to Linux notation to use the md5sum utility from the shell. The first line in my example would become: f12f75c1f2d1a658dc32ca6ef9ef3ffc My\ Windows\ \&\ Files\ \(2010\)/\[bak\]/testing.wmv Is there some application for this (converting a file listing from Windows cmd notation to Linux shell notation), or will I need to create a bash script using sed that just "replaces" what is "wrong" with the filenames?
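
    One detail that simplifies this: md5sum -c reads the file names from the checksum file itself, so they never pass through the shell and don't need escaping - only the path separators and the comment lines matter. A throwaway sketch (assuming Python 3 and that the listing sits in the directory being verified; the input encoding is a guess given the ˜nicóde entry):

        # convert_md5.py - converts a Windows-style checksums.md5 for md5sum -c
        with open("checksums.md5", encoding="latin-1") as src, \
             open("checksums-unix.md5", "w", encoding="utf-8") as dst:
            for line in src:
                if line.startswith(";"):       # ";Created by program" etc.
                    continue                   # md5sum would reject comment lines
                # format is "<hash> *<name>"; only separators need changing
                dst.write(line.replace("\\", "/"))

    Then verify with: md5sum -c checksums-unix.md5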

    Read the article

  • curl installation and upgrade

    - by user26202
    On a CentOS 5.7 machine we had curl 7.15 installed. We also have PHP installed, and some of the PHP libraries are linked against curl. We wanted to upgrade curl to 7.19 but yum update was failing, so we manually installed 7.19 from source. Now we have two curl versions: /usr/bin/curl points to 7.15, /usr/local/bin/curl points to 7.19, and PHP still uses curl 7.15. How do we delete curl 7.15 without breaking the dependency (PHP), and make PHP start using curl 7.19?
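
    A quick way to confirm which libcurl PHP actually loads (the module path below is an assumption for a stock CentOS 5 box; adjust as needed):

        # what the curl extension reports
        php -i | grep -i curl

        # which shared library the extension is linked against
        ldd /usr/lib/php/modules/curl.so | grep curl

    Pointing PHP at the new library generally means rebuilding the curl extension (or PHP) with something like ./configure --with-curl=/usr/local rather than deleting the old package, since other RPMs may depend on it.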

    Read the article

  • PHP on IIS7 not showing pages

    - by Jeff
    I have a PHP website on a Windows 7 machine I'm working with, and it cannot be viewed by any browser - IE, Chrome, Firefox. When navigating to the root of the website (default index.php) the browser reports it cannot find the address - not a 404 error from the webserver, just as if it cannot resolve the name. Other websites in the same default web application that are also PHP work perfectly. I've aligned all folder permissions and everything else, but this has got me stumped. I even went as far as to create a new folder and throw in a test phpinfo() page, and it worked. I copied this website's content to the new folder and it cannot find the index.php page. I checked all the settings I know and can't seem to find what I'm missing. Has anyone else encountered this issue? Remember the fix for it?

    Read the article

  • Windows 7 Firewall configuration

    - by Will Calderwood
    I had a PC set up with a VPN. I used the Windows 7 firewall to block all non-VPN traffic to the internet, but all LAN traffic was allowed. So, with the VPN connected I could connect to all networked machines and the internet; without the VPN connected I could only connect to the LAN and had no internet access. Unfortunately my drive failed, and I'm setting up the machine again with a replacement drive. I can't for the life of me work out how to set up the firewall again. I can easily set it up to block all non-VPN traffic, but can't work out how to do that and still allow all LAN traffic whether the VPN is connected or not. Some pointers would be useful. Thanks.
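
    For anyone reconstructing a similar setup, the rough shape in netsh terms might look like the following (rule names, the LAN subnet and the VPN server address are placeholders, and this is a sketch of one possible approach, not a tested recipe):

        rem default-deny outbound traffic
        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

        rem allow outbound traffic to the local subnet
        netsh advfirewall firewall add rule name="LAN out" dir=out action=allow remoteip=192.168.1.0/24

        rem allow reaching the VPN server so the tunnel can come up
        netsh advfirewall firewall add rule name="VPN endpoint" dir=out action=allow remoteip=203.0.113.10

        rem allow everything that leaves through the VPN (RAS) interface
        netsh advfirewall firewall add rule name="VPN out" dir=out action=allow interfacetype=ras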

    Read the article

  • PostgreSQL under Mac OSX Lion. Wrong userpass?

    - by Matt
    I'm completely helpless; maybe you guys can help me out. I installed PostgreSQL under my new Mac OS X Lion. When I try to connect to my localhost with pgAdminIII.app it says: Error connecting to the database: FATAL: password authentication failed for user "postgres". I just have no idea what to do - none of my passwords work, neither my admin password nor "postgres" nor anything else. I tried to install it again via the console, where I found this helpful link: http://www.peerassembly.com/2011/08/...resql-on-lion/ However, the problem is that when I try to run createuser -a -d _postgres the same password problem appears again. I just can't seem to find a solution to this - always a wrong password. Btw, I have a new user called "PostgreSQL" on my machine after I installed Postgres. Any ideas? I'm so stuck and I really need to make this work.
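
    If local connections are trusted (or can temporarily be set to trust in pg_hba.conf), one common way out is to reset the password from a shell on the machine itself - a sketch, assuming the server runs under the _postgres system account; the account name may well differ for a separately installed Postgres:

        sudo -u _postgres psql postgres
        -- then, at the psql prompt:
        ALTER USER postgres WITH PASSWORD 'newpassword';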

    Read the article

  • Cannot run ifconfig-like commands via the browser

    - by savruk
    The problem is that I cannot run "ifconfig" or similar commands via the browser. Environment: programming language: Python; server: lighttpd (CGI), running on BusyBox. The machine is really small, so I am really restricted. Tried techniques: chown every script to root - but it makes no difference. Why? Because lighttpd runs under another user, not root, so when I try to run a script from the browser it always executes the Python file with lighttpd's uid. That makes it impossible to run commands like "ifconfig eth0 192.168.2.123" via the web browser; I get an "ifconfig: SIOCSIFADDR: Permission denied" error. What can I do? I do not have any sudoers file, so I cannot modify the sudo configuration. Well, I don't even have a "sudo" command :) Thanks for your help
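
    For context: ownership of the script doesn't matter because the kernel ignores setuid on scripts; only a setuid-root binary (or a helper process already running as root) changes the effective uid. A sketch of the CGI side, where /usr/local/bin/netcfg is a hypothetical setuid wrapper that would have to be compiled for the box and validate its own arguments:

        #!/usr/bin/env python
        import os
        import subprocess

        print("Content-Type: text/plain\n")

        # this is lighttpd's uid - the reason plain ifconfig fails
        print("effective uid: %d" % os.geteuid())

        # hypothetical setuid-root wrapper; NOT a real tool on the box
        status = subprocess.call(["/usr/local/bin/netcfg", "eth0", "192.168.2.123"])
        print("wrapper exit status: %d" % status)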

    Read the article

  • HP Smart Array; Is it possible to convert Raid1 to Raid0 by dropping a failed drive?

    - by Erik Heppler
    I have a server that was running two 60GB drives as a logical RAID1. At some point the second drive was physically removed and the logical drive has been in "Interim Recovery Mode" for several months now. There's no need for the redundancy of RAID1 on this machine, and I have no intention of replacing the missing drive. If possible I would like to convert the current RAID1 to a single-drive RAID0 by simply dropping the failed drive from the current configuration. I'm only interested in doing this if it can be done in-place. Otherwise I'm perfectly content to leave it in "Interim Recovery Mode" indefinitely.

    Read the article

  • How to update an off-screen bitmap in a SurfaceView thread

    - by DKDiveDude
    I have a SurfaceView thread and an off-screen texture bitmap that is being generated (changed) - first row (line) - every frame, and then copied one position (line) down on the regular SurfaceView bitmap to make a scrolling effect; I then continue to draw other things on top of that. Well, that is what I really want, but I can't get it to work even though I am creating a separate canvas for the off-screen bitmap. It is just not scrolling at all. In other words, I have a memory bitmap, the same size as the SurfaceView canvas, which I need to scroll (shift) down one line every frame, then replace the top line with new random texture, and then draw on the regular SurfaceView canvas. Here is what I thought would work.

    My surfaceCreated, where I set up the bitmap and canvases and start the thread:

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            intSurfaceWidth = mSurfaceView.getWidth();
            intSurfaceHeight = mSurfaceView.getHeight();
            memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight,
                    Bitmap.Config.ARGB_8888);
            // the canvas must wrap the bitmap (the excerpt had new Canvas(memCanvas),
            // which does not compile)
            memCanvas = new Canvas(memBitmap);
            myThread = new MyThread(holder, this);
            myThread.setRunning(true);
            blnPause = false;
            myThread.start();
        }

    My thread, showing only the essential running part:

        @Override
        public void run() {
            while (running) {
                c = null;
                try {
                    // lock the SurfaceView canvas for drawing
                    c = myHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        // redraw the off-screen bitmap onto its own canvas one line down;
                        // note this draws a bitmap into itself, which may be the problem
                        memCanvas.drawBitmap(memBitmap, 0, 1, null);
                        // create a random one-line (row) texture bitmap
                        memTexture = Bitmap.createBitmap(imgTexture, 0,
                                rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1);
                        // draw the texture line at the top of the off-screen canvas
                        // (x must be 0; the excerpt passed intSurfaceWidth, which would
                        // place the line entirely off the bitmap)
                        memCanvas.drawBitmap(memTexture, 0, 0, null);
                        // draw the updated off-screen bitmap to the regular canvas
                        c.drawBitmap(memBitmap, 0, 0, null);
                        // other drawing to the canvas comes here
                    }
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        myHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    This is for my game Tunnel Run. Right now I have a working solution where I instead keep an array of bitmaps, one per surface row, that I populate with my random texture and then shift down in a loop each frame. I get 50 frames per second, but I think I can do better by scrolling a single bitmap instead.

    Read the article

  • Ubuntu 12.04.3 64 bit with Nemo 2.0.x no thumbnails

    - by Dr. Szrapnel
    I have a strange problem with thumbnails on my Ubuntu machine. I was using Ubuntu with Cinnamon 1.8 from the stable PPA and it was good, but then Cinnamon 2.0 came out with some broken packages uploaded to the stable PPA and things went wrong... Anyway, after a few updates Cinnamon started to work normally, except Nemo - there are no thumbnails at all, only icons. I have tried purging the ~/.cache/thumbnails and ~/.thumbnails folders, but this doesn't work. Next I changed permissions on those folders - that didn't help either. Then I set Nemo as the default file manager and desktop handler, but with no result. What is weird: when I start Nautilus and open some folder with images, then close it and open the same folder with Nemo, thumbnails appear; but when I clean the thumbnail directories there are no thumbnails again. It would be great if someone had a solution for this annoying Nemo behavior, because I really don't want to give up Cinnamon. P.S. I have set the preview options in Nemo to "Always" and files no bigger than 4GB, so that is not the case.

    Read the article

  • How to cache streaming video and Silverlight with a Squid reverse proxy on Windows

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network using the BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned - requests are forwarded and the application works - but it doesn't seem to cache anything (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated; any info you need to help me will be provided at once. Thanks in advance!
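
    In case it points anyone in the right direction: a Squid reverse proxy normally needs an explicit cache_dir line before anything is written to disk, roughly along these lines in squid.conf (host, size and path values are placeholders):

        # accept requests on behalf of the origin server
        http_port 80 accel defaultsite=media.example.local
        cache_peer 10.0.0.5 parent 80 0 no-query originserver name=mediaSrv

        # without a cache_dir, Squid caches in memory only
        cache_dir ufs c:/squid/var/cache 10000 16 256
        maximum_object_size 512 MB

    It is also worth checking the origin's response headers: anything served with Cache-Control: private or no-cache, or streamed over a non-HTTP protocol, will never be cached no matter how Squid is configured.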

    Read the article

  • Dell Dimension running Fedora12 does a "Sleeping Beauty" and I am not a "handsome prince"!

    - by Jim Dobbs
    A Dell Dimension 2350 with a Pentium IV processor and integrated video and network chips, running Fedora 12, does a "Sleeping Beauty" and I, apparently, am not a "handsome prince"! The system puts the video and network to sleep and they will not wake up. I have heard of this problem on laptops, but this is a tower. Any ideas or help is appreciated. I tried to ping the network card from another system and the ping fails. The logs indicate that the system continues to be active. Pressing keyboard shortcut keys makes the disk light blink, but neither the video nor the network card comes alive. Failing all else, are there any Linux commands that I could schedule in cron to pulse the video and network adapters hourly to keep them awake? Or should I wait for Fedora 13? Before this machine, I built a Dimension 2400 with a Pentium IV and it had the same problem. Fedora 9 on the same hardware is fine.

    Read the article

  • Dual boot Windows 8 and Ubuntu?

    - by askvictor
    I've installed Windows 8 on a machine (Lenovo X220 laptop) with Ubuntu 12.10 already installed on another disk. I am guessing that Windows 8 has convinced the laptop to switch to UEFI boot (rather than the BIOS boot that was there previously), as the Lenovo splash screen on startup no longer offers the options to interrupt the boot process (e.g. to choose the boot drive). Previously I had Windows 8 on one drive and Ubuntu on the other, so I could choose my OS through the BIOS rather than through GRUB or another bootloader, but I no longer have that option. How can I get back the option to boot Ubuntu? I would sort of prefer UEFI boot, as it seems much faster than BIOS.

    Read the article

  • Lightweight monitoring for a Windows XP laptop

    - by kazanaki
    Hello, I have a Windows XP laptop in a remote location. I would like to have an overview of CPU/memory statistics from a remote location. Monitoring a specific service (a Tomcat instance) would be nice but not essential. I have seen the monitoring solutions (Nagios, Cacti etc.) and they are all very heavy. I do not want to install MySQL, a web server and other stuff like that on the laptop. I don't even need a web solution at all; it could just be a simple command-line app listening on a server port, and on my machine another GUI application would connect there (not a web browser). Is there something like this available?

    Read the article

  • How do I install Ubuntu 13.10 from a partition on my Mac?

    - by Barry
    I am trying to install Ubuntu 13.10 on my MacBook Air. I've previously had no issue installing from a USB stick to this machine; however, I don't currently have access to a USB stick or any external media at all! What I've done so far is partition my SSD into 3 partitions: one holds OS X, another is a 5GB partition intended for the install ISO, and a third is intended to be the target for that install. The second two partitions are formatted as FAT. I've used dd (with and without bs=1m) to "burn" my ISO to the small 5GB FAT partition. At one point I also tried using hdiutil to convert my ISO file to IMG and went through the same process with the same result below. After "burning" the ISO to the small partition, I reboot into rEFInd. rEFInd sees my 5GB partition perfectly well, and when I select that partition it loads GRUB appropriately. However, from there, regardless of what I choose, Ubuntu starts to load and then after a few minutes crashes out to: BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) Unable to find a medium containing a live file system. I've Googled this error and found a number of people encountering it when trying to install from USB, but no solutions seem applicable to my case (installing from a partition on my SSD, to another partition on my SSD). Is there any solution to this, or do I just need to wait a few days until I have access to a USB stick? Many thanks in advance, and apologies for the length - I figured I'd err on the side of being exhaustive rather than having people suggest things I've already tried.
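
    For reference, the usual Mac-side pair of commands looks like this (file names and the partition device node are placeholders - double-check with diskutil list first, since dd to the wrong device is destructive):

        # convert the ISO to a raw read/write image (hdiutil appends .dmg)
        hdiutil convert -format UDRW -o ubuntu-13.10.img ubuntu-13.10-desktop-amd64.iso

        # unmount (not eject) the target partition, then write the image onto it
        diskutil unmount /dev/disk0s3
        sudo dd if=ubuntu-13.10.img.dmg of=/dev/rdisk0s3 bs=1m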

    Read the article

  • USB device is recognized but has no address

    - by SeanMG
    Good day folks, I'm trying to use a USRP1 with GNU Radio, if anyone knows what any of that is. I am running Ubuntu on a Windows 7 machine via VMware Player. When I connect this USRP1 via a USB 2.0 port to Windows 7 it is recognized as "Ettus Research LLC USRP1". When I connect the device to Ubuntu through VMware, it shows "usb device fffe:0002" in my removable devices. When I run lsusb I receive the following:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 0e0f:0003 VMware, Inc. Virtual Mouse
        Bus 002 Device 003: ID 0e0f:0002 VMware, Inc. Virtual USB Hub
        Bus 001 Device 004: ID fffe:0002

    When I run uhd_find_devices, a program that comes with the USRP driver, I receive:

        --------------------------------------------------
        -- UHD Device 0
        --------------------------------------------------
        Device Address:
            type: usrp1
            name:
            serial: 00000000

    So this program does recognize that the device is connected. However, the device has no address, no name, and a null serial. I need to know the device address so I can run more programs in GNU Radio. Does anyone know what the problem is here? Thanks!

    Read the article

  • What does "cpuid level" mean?

    - by ogzylz
    As an example, here is the output for just one core of a 16-core machine. What does "cpuid level: 6" mean in this output? Also, what do "bogomips: 5992.10" and "clflush size: 64" mean?

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

    Read the article

  • IIS 7.5 401 - Unauthorized Access on a Virtual Directory

    - by Jimmy
    I have set up a website in IIS 7.5 on a Windows 2008 machine. The website sits in C:/websites/. I then added a virtual directory called "/uploads" that points to D:/websites/uploads; this directory holds all the images/media. When I browse the website in a browser, I don't see any images etc. When I browse to an image directly, I notice that it throws a 401 error: 401 - Unauthorized: Access is denied due to invalid credentials. I have searched Google quite a lot and I am pretty sure I have all the permissions set up correctly. Can anyone tell me what I could be doing wrong here?

    Read the article

  • How to netboot Ubuntu running inside VirtualBox on Mac Air

    - by murungu
    Having configured a virtual machine for Ubuntu in VirtualBox on my Mac Air, I need to install the Ubuntu OS itself. I have selected the hard drive as the primary boot device and the network as the secondary boot device, so I am not prompted for an Ubuntu disk at boot time. It attempts to netboot but is unable to locate Ubuntu, and I cannot find anywhere in the configuration where I can explicitly specify where to find an Ubuntu image, so I assume it reverts to some default location and fails. Has anybody out there ever successfully installed Ubuntu on VirtualBox on their Mac Air? What steps did you take to get it right?
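
    In case it saves the next reader some time: the usual route is to skip netbooting and attach the installer ISO as a virtual DVD instead - a sketch with placeholder VM and controller names (check yours with VBoxManage showvminfo):

        # attach the installer ISO to the VM's virtual optical drive
        VBoxManage storageattach "Ubuntu" --storagectl "IDE" \
            --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu.iso

        # boot from the DVD first, then the hard disk
        VBoxManage modifyvm "Ubuntu" --boot1 dvd --boot2 disk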

    Read the article

  • FTP client that supports 2 concurrent FTP sessions

    - by oninea
    I'm looking for an FTP client that can connect to two different FTP servers at the same time and allow file transfer or synchronization between those two servers. Basically, what I want to achieve is to transfer/synchronize files between two different sites from my local machine. Are there any clients around that support this functionality? If there are none, is there an alternative way to achieve this? I've taken a look at net2ftp, a web-based FTP client, which provides almost exactly the functionality I need; what I'm looking for, though, is a desktop app. Any ideas?

    Read the article
