Search Results

Search found 12222 results on 489 pages for 'initial context'.

  • What file transfer protocols can be used for PXE booting besides TFTP?

    - by Stefan Lasiewski
    According to ISC's dhcpd manpage:

        The filename statement

            filename "filename";

            The filename statement can be used to specify the name of the initial boot file which is to be loaded by a client. The filename should be a filename recognizable to whatever file transfer protocol the client can be expected to use to load the file.

    My questions are:

      - What file transfer protocols, besides TFTP, are available to load the file (i.e. what protocols "can be expected to" load the file)? How can I tell? Can I see a list of these protocols?
      - Does my choice of DHCP server influence which file transfer protocols are available (say, if I want to use dnsmasq instead of ISC's dhcpd)?
      - Are these features dependent on the PXE ROM in use (e.g. my Intel NICs use an Intel ROM)?

    I know that some PXE variants, such as iPXE/gPXE/Etherboot, can also load files over HTTP. However, the PXE ROM needs to be replaced with the iPXE image, either by chainloading or by burning the iPXE image onto the NIC. For example, the iPXE howto "Using ISC dhcpd" says: ISC dhcpd is configured using the file /etc/dhcpd.conf. You can instruct iPXE to boot using the filename directive:

        filename "pxelinux.0";

    or:

        filename "http://boot.ipxe.org/demo/boot.php";
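
    A minimal sketch of how that chainloading is typically wired up in /etc/dhcpd.conf (hedged: undionly.kpxe and boot.example.com are placeholders, and this assumes the iPXE binary already sits on your TFTP server):

        # Plain PXE ROMs only speak TFTP: hand them the iPXE binary first,
        # then let iPXE (which speaks HTTP) fetch the real boot file.
        if exists user-class and option user-class = "iPXE" {
            filename "http://boot.example.com/boot.php";
        } else {
            filename "undionly.kpxe";
        }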

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The connection string being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the application pool is set to NetworkService for Identity. So - in the web app, I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    Gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?
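
    One avenue worth ruling out (a hedged sketch, not a confirmed fix): those rows prove the database user exists, but this error can also occur when the matching server-level login is missing or the database user is orphaned from it. Something like the following would check and repair that (names taken from the question):

        -- Check that a server-level login exists for the machine account.
        USE [master];
        SELECT name FROM sys.server_principals
        WHERE name = 'MyDomain\WebServerMachineName$';

        -- If it is missing, create it, then re-map the database user to it
        -- in case the user is orphaned.
        CREATE LOGIN [MyDomain\WebServerMachineName$] FROM WINDOWS;
        USE [MyDatabase];
        ALTER USER [MyDomain\WebServerMachineName$]
            WITH LOGIN = [MyDomain\WebServerMachineName$];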

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

      - writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display)
      - when I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled

    The performance of the disks individually is very good. I've run bonnie++ on each one separately, with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:

        Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
        mdadm - v2.6.7.1

    Does anyone have a suggestion on how to debug this further?
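
    Tangentially (a hedged aside): with this geometry it is also worth creating the filesystem aligned to the array - 64k chunks and 4k blocks give stride = 16, and 3 data disks in a 4-disk RAID5 give stripe-width = 3 × 16 = 48 - so metadata writes land on full stripes (assuming the array is /dev/md0):

        # Hedged sketch: ext3 aligned to the RAID5 geometry described above.
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0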

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 ChromeBook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs:

        Please install the build and header files for your current Linux kernel.
        The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...and one of the more cryptic errors I get when trying to start any virtual machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
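
    The usual first move for this class of DKMS failure is below (a hedged sketch; on a ChromeBook kernel the matching headers package may simply not exist in the archives, which is exactly what the error hints at, and it assumes the Ubuntu-packaged virtualbox-dkms is installed):

        # Install headers matching the running kernel, then rebuild the
        # VirtualBox kernel modules via DKMS.
        sudo apt-get install dkms linux-headers-$(uname -r)
        sudo dpkg-reconfigure virtualbox-dkms
        sudo modprobe vboxdrv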

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true:

      - I have ssh-key based access to a remote user (cloud)
      - That user has password-free root access via sudo.

    Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    Will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools that will allocate a pseudoterminal for me and that are likely to be generally available?
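
    Worth noting as a baseline (a hedged aside, not from the original post): ssh can allocate the pseudoterminal itself, which is often all that requiretty needs; -tt forces allocation even when ssh's own stdin is not a terminal, as in a provisioning script:

        # Force pseudoterminal allocation so sudo's requiretty check passes.
        ssh -tt remotehost sudo yum -y install puppet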

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    I posted this question on SuperUser but it's hardly getting any views, so I thought I'd ask here as well. In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting in the shell:

        shelltitle "$ |bash"

    Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case):

        \033k\033\\

    This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not escaped itself, causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. Turns out, simply escaping the previous escape sequence corrects line wrapping:

        \[\033k\]\[\033\\\]

    Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. I am wondering, has anyone gotten this working correctly - i.e. line wrapping and proper updating of the hardstatus window title? Thanks!

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and websites like Hulu.com, etc. without using traditional methods like a VPN or web proxy. From their FAQ:

        Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We're using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-address certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that's not directly related to the video or music content providers which Tunlr supports is not only left untouched, it's also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information.

    I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works. EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address so that the streaming is direct, not proxied.

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the following error: '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception. This error does not occur immediately - I can pump a fair amount of data into the db - but then it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, then it resumes where it left off, pumps in some data, then falls over again. Before recommendations to change time-out timings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between each? (btw, I'm running Windows 7, MySQL server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6)
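
    On the reconnect question, a hedged sketch of the pattern that usually works (Python 2 / MySQLdb, matching the versions above; host and credentials are placeholders): explicitly close the dead handle before building a new one, and back off between attempts. If even this fails, a server-side setting (e.g. max_allowed_packet or connection limits) is the more likely culprit:

        import time
        import MySQLdb

        def get_connection():
            return MySQLdb.connect(host='localhost', user='user',
                                   passwd='secret', db='mydb')

        def execute_with_retry(conn, sql, args, retries=3):
            for attempt in range(retries):
                try:
                    cur = conn.cursor()
                    cur.execute(sql, args)
                    return conn, cur
                except MySQLdb.OperationalError, e:
                    if e.args[0] != 2006:   # 2006 = server has gone away
                        raise
                    try:
                        conn.close()        # discard the dead handle
                    except MySQLdb.Error:
                        pass
                    time.sleep(2 ** attempt)
                    conn = get_connection()
            raise RuntimeError('could not re-establish MySQL connection')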

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle running this application. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling the work to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation. Is there a product out there that would let me "compartmentalize" an installation (or 4) so I can take advantage of a multi-core CPU? I had thought of using Microsoft SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on? All without really changing the initial application? In a perfect world, I'd get the application to take advantage of a desktop grid (BOINC), but like most "mission critical corporate applications", the need is there, but the money is not. Thank you in advance (and sorry if this isn't appropriate for ServerFault).

  • Forcing users to change password on first login - Windows Server 2008 R2 Remote Desktop Services

    - by George Durzi
    I'm setting up a demo lab environment in which each demo lab user is assigned 4 accounts to use in the lab. Users access the lab via Remote Desktop to the "client" machine in the lab - exposed at demolab.mydomain.com. The setup is as follows:

      - The client machine is a Windows 2008 Server R2 Enterprise Edition server
      - The Remote Desktop Services role is configured on this server
      - Remote connection settings are configured to allow users to connect with any version of the Remote Desktop Client
      - All accounts are members of the local Administrators and Remote Desktop Users groups
      - All accounts are configured to be forced to change the default password after first login

    The user is instructed to remote into the lab with an account designated as their main account, and establish 3 more remote desktop sessions within the lab using their 3 other assigned demo lab accounts. When establishing the initial remote desktop connection to the lab using their main account, the user sees the change password dialog as expected. However, after logging in and trying to establish remote desktop connections to the server with their three other accounts, they are prompted that they need to change the password after logging in but can't continue with the login process - they don't see the expected change password experience. After logging in with a primary account, it doesn't make a difference if I try establishing a Remote Desktop connection to the environment using the name of the server, e.g. Client, or demolab.mydomain.com. I experimented with changing the settings for Remote Connections to require NLA, but that didn't make a difference. Appreciate any tips. Thanks

  • OS X superuser folders automatically created. Per-user launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop running OS X 10.8.2. I have used OS X, but many years previously, and am not familiar with subtleties or changes in com.apple.launchd.peruser.x... I have previously (and in retrospect, foolishly) made changes to these rapidly spawned new peruser accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with, years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, overreactive, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user-friendly than ASP prior to .NET, I vaguely recall). Creation of apparently nasty peruser folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured, or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, only sporadically develop at this time, and would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or if you have had a similar experience, then perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy, who wants to do some things which were fun about 20 years ago.

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement of live data viewable to customers. The project's main points are:

    1. Devices in the field send data to the server (devices are also like website users) after login.
    2. There is a background import process which imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 cloud machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases with time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is: we will create one more server like the current one and divide the devices amongst these two servers. But again, we need a central user and device management point through Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
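
    One common first step (a hedged sketch, not a full answer; all hostnames and credentials below are placeholders): keep a single Postgres primary for writes, add a streaming replica for the read-heavy customer views, and point Django at it with a database router. Writes (device uploads, commands, admin) stay on the primary, so the central management point is unaffected, with the caveat that replication lag can make a just-saved object briefly invisible to reads:

        # settings.py
        DATABASES = {
            'default': {   # primary: all writes (device uploads, commands)
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'mydb', 'HOST': 'primary.internal',
                'USER': 'app', 'PASSWORD': 'secret',
            },
            'replica': {   # streaming replica: read-heavy customer views
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'mydb', 'HOST': 'replica.internal',
                'USER': 'app', 'PASSWORD': 'secret',
            },
        }
        DATABASE_ROUTERS = ['routers.ReadReplicaRouter']

        # routers.py
        class ReadReplicaRouter(object):
            def db_for_read(self, model, **hints):
                return 'replica'
            def db_for_write(self, model, **hints):
                return 'default'
            def allow_relation(self, obj1, obj2, **hints):
                return True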

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for this belonging on ServerFault rather than StackOverflow: I already have my program which gets the value; I am asking about the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly with some of the speeds being reported incorrectly (for example, CurrentClockSpeed said 1.0 GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz [confirmed it is in fact 1.83 GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was that I could change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean by that is: will this return its actual maximum speed (say, if it were overclocked) at which it would not normally be running, or would it return what I expect, which is its maximum speed under normal (not overclocked) conditions?
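
    For a quick side-by-side view of the two values on any machine (a sketch in PowerShell; the in-house program presumably does the same WMI query in its own language):

        # Compare the two WMI values for every processor.
        Get-WmiObject Win32_Processor |
            Select-Object Name, CurrentClockSpeed, MaxClockSpeed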

  • Connect over WiFi to SQL Server from another computer

    - by Bronzato
    I am trying to connect over WiFi to SQL Server with SQL Server Management Studio from another computer, but it fails. I have a computer with Windows 7 & SQL Server 2008 (let's say the server computer). Next to it, I have a freshly installed computer with Windows 7 & SQL Server Management Studio (let's say the client computer). What I did on the server computer:

      - configured the firewall by enabling port 1433
      - enabled network protocols (TCP/IP) inside SQL Server Configuration Manager
      - checked "Allow remote connections to this server" in the server properties in SQL Server Management Studio
      - started SQL Server Browser
      - restarted the services (SQL Server Browser is stopped, but I think it is not necessary, is it?)

    Next, I successfully tested a ping on port 1433 from my client computer with a tool named tcping (ex: tcping 192.168.1.4 1433). But I still cannot connect from my client computer to SQL Server on the other computer. OK, something new on this problem: until now, I successfully connected to my "server computer" with Management Studio. What I do is type the computer name in the server name field in the connection window of Management Studio. My previous (failed) attempt was to type the computer name followed by the instance of SQL Server (ex: COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name... Never mind. Now my new challenge is to connect my VB6 application to this remote database located on my "server computer". I have a connection string for this, but it fails to connect. Here is my connection string:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008"

    Any idea what's wrong? Thanks
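
    A plausible explanation, hedged: a named instance is located via the SQL Server Browser service on UDP port 1434, which the firewall rule above does not cover; connecting by machine name alone works when the instance answers on the open, fixed TCP port 1433. One way to make the VB6 connection string independent of the Browser service is to name the port explicitly (the "server,port" syntax):

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP,1433"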

  • Linux Startup Script after Gnome Login

    - by Eric
    I have a Fedora server where I want to spawn an interactive Python script after the user logs in. This script will ask the user for various types of information for configuring the system, or it will search for the previous config file and show them the predefined information. Originally I was going to put this in rc.local or make it run with init.d, but that messed up the boot due to how the script is spawned. So I would like this script to run as soon as the user logs in to GNOME. I've searched around quite a bit and found this answer, which appears to be exactly what I want, but it isn't working the way I want it to. Below is my entry.

        [Desktop Entry]
        Name=MyScript
        GenericName=Script for initial configuration
        Comment=I really want this to work
        Exec=/usr/local/bin/myscript.sh
        Terminal=true
        Type=Application
        X-GNOME-Autostart-enabled=true

    Whenever I log in, nothing happens. So I then did a test: I modified "myscript.sh" to just echo some text to a file, and it worked fine. So it appears the portion that isn't working is the script popping open a terminal and waiting for the user's input. Are there any additional options I need to add to make this work? I can confirm that when I run /usr/local/bin/myscript.sh from the CLI it works fine. I have also tried adding "StartupNotify=true" and still no luck.

    Edit: @John - I tried moving my Exec= to /usr/local/bin/myscript-test, and this is what myscript-test contains:

        #!/bin/bash
        xterm -e /usr/local/bin/myscript.sh

    Yet again, when I just run myscript-test it works fine. However, when I put that in my autostart, nothing happens.

    Edit 2: I did a few more tests and it did start working, but I had to remove Terminal=true before the xterm would pop up. Thanks for your help.
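
    For anyone landing here, a sketch of the working combination implied by Edit 2 (hedged: Terminal=true removed, the terminal launched explicitly by the wrapper script instead):

        [Desktop Entry]
        Name=MyScript
        Comment=Interactive first-login configuration
        Exec=/usr/local/bin/myscript-test
        Type=Application
        X-GNOME-Autostart-enabled=true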

  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated software/media; there are no binaries on my site, except "profile icons". Hello! I want to start publishing "social" websites/apps, and I found that the biggest initial problem is this: any and all services I have to depend on - including domain registrar, DNS provider, server/cloud provider, CDN provider... even my insurance agent - basically say that they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating, because you just can't fully control what users post on your "social website", and so you basically have to expect, when you go to bed, that your site is going to be gone when you wake up. I've heard a lot of horror stories about this. Since the "Terms of Service" of all those providers exist foremost to protect themselves from legal action, and those legal actions depend on the country where they are located, it seems like the first step is to find which country is the "safest" in which to locate a site. "Safest" being defined as: where I am least likely to get in legal trouble with the local authorities if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe" if my users can't get to my sites, or the latency is unacceptable. I am targeting the English-speaking people in any country as my future users.

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue; only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

        # all customers have group 'customer'
        Match group customer
            ChrootDirectory /home/%u        # jail in home directories
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp      # force SFTP
            PasswordAuthentication yes      # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
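
    One approach that fits these constraints (a hedged sketch, with a placeholder UID): give each extra SFTP login its own username and password but the same UID, GID and home directory as customer, so every file any of them creates is owned by customer and the Match block above still applies:

        # Hedged sketch: a second login sharing customer's UID
        # (1001 stands in for customer's real UID).
        useradd --non-unique --uid 1001 --gid customer \
                --home-dir /home/customer --shell /usr/sbin/nologin \
                customer_developer1
        passwd customer_developer1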

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/. For dev/testing, we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching - all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.
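
    A hedged guess at the missing pieces: DNAT alone does nothing unless the kernel routes the packets onward and the FORWARD chain lets them through, so something like this is usually needed alongside the rules above:

        # Enable routing and permit the DNATed traffic through the box.
        sysctl -w net.ipv4.ip_forward=1
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT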

  • Ubuntu 12.10, Unity, AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL detection of multiple gpus

    - by junkie
    I'm using Ubuntu 12.10, AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL. I have three AMD Radeon 7990s plugged in, each of which is a dual 7970, so I have six GPUs altogether. I plan to go up to eight in a few days. Windows couldn't use even 4, but Linux works fine with 6 so far. The strange thing is that the six GPUs are only detected by OpenCL in Unity (the Ubuntu default window manager). If I switch to e17, blackbox or fluxbox - or anything else for that matter - OpenCL only detects one. I'm using a simple OpenCL program that lists all devices to check. I've also checked the output of aticonfig --list-adapters, fglrxinfo and clinfo. The first two always show six in all window managers, whereas clinfo shows 6 GPUs in Unity but 1 in all other WMs. I'm also using an X config generated by aticonfig --initial -f --adapter=all. I'm also only using one monitor. I've also checked (using lsmod) that the fglrx module is loaded in all WMs. So I have two questions: Why does OpenCL see six GPUs only in Unity? How can I enable six GPUs on other lightweight WMs? Basically I'm getting at: what determines how many GPUs the OpenCL runtime sees? Thanks.
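
    Possibly relevant (hedged): with the fglrx OpenCL stack, device enumeration is tied to the X display the runtime can reach, so a session that only initializes one screen can expose a single GPU. The workaround usually cited for minimal or headless sessions is:

        # Point the AMD OpenCL runtime at the X display that has all
        # adapters configured, and allow local access to it.
        export DISPLAY=:0
        export COMPUTE=:0
        xhost +local: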

  • Gifsicle: How to set it to not overwrite the original GIF file if the resulting modified GIF file is larger than the original?

    - by galacticninja
    About Gifsicle: Gifsicle is a command-line tool for creating, editing, and getting information about GIF images and animations. One of its features is (from its website):

        Optimize your animations! This stores only the changed portion of each frame, and can radically shrink your GIFs. You can also use transparency to make them even smaller. Gifsicle's optimizer is pretty powerful, and usually reduces animations to within a couple bytes of the best commercial optimizers.

    I call Gifsicle through this .BAT file in the right-click 'Send to' menu:

        @echo off
        :compressFile
        "C:\Programs\Compression Scripts\gifsicle\bin\gifsicle.exe" --batch -V -O3 %1%
        echo.
        echo.
        SHIFT
        if exist %1% goto compressFile
        PAUSE

    This animated GIF file, however: http://i.minus.com/i7WdodY5Zwot3.gif, when its compression is optimized with Gifsicle with the above commands, results in a larger-filesized GIF file, and Gifsicle overwrites the original GIF file with the resulting larger file. Initial filesize: 7.57 MiB (7,942,886 bytes). After running through the above commands with Gifsicle: 7.64 MiB (8,017,622 bytes). Is there a way to prevent Gifsicle from overwriting the original file if its output file is larger than the original, while still overwriting the original file if the output is smaller? Details:

        OS: Windows 7
        Gifsicle version: 1.63, from the binary provided here: http://www.lcdf.org/gifsicle/
        Gifsicle manual
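
    Gifsicle 1.63 does not appear to have such a guard built in, but the .BAT wrapper can provide it (a hedged sketch: optimize into a temporary file, compare sizes, and replace the original only when the result is smaller):

        @echo off
        :compressFile
        "C:\Programs\Compression Scripts\gifsicle\bin\gifsicle.exe" -V -O3 %1 -o "%TEMP%\gifsicle_out.gif"
        rem Replace the original only if the optimized copy is smaller.
        for %%A in (%1) do set ORIGSIZE=%%~zA
        for %%A in ("%TEMP%\gifsicle_out.gif") do set NEWSIZE=%%~zA
        if %NEWSIZE% LSS %ORIGSIZE% copy /y "%TEMP%\gifsicle_out.gif" %1
        del "%TEMP%\gifsicle_out.gif"
        SHIFT
        if exist %1 goto compressFile
        PAUSE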

  • How to reject messages to unknown users in sendmail cooperating with MS Exchange?

    - by user71061
    Hi! I have an MS Exchange 2003 server configured as a mail server for an organization. As this server is located on the organization's internal network and I don't want to expose it directly over the internet, I have a second server - a Linux box with sendmail - configured as an intelligent relay (it accepts all messages from the internet addressed to @my_domain and forwards them to the internal Exchange server, and accepts all messages from the internal Exchange server and forwards them over the internet). This configuration works fine, but I want to eliminate messages addressed to non-existing users as early as possible. A good solution could be enabling on the Exchange server the function of filtering recipients together with "tar pitting", but in my case this doesn't solve the problem, because before any message reaches my Exchange server (which could eventually reject it), it has to be already accepted by the sendmail server sitting in front of it. So, I want to configure my sendmail server in such a way that during the initial SMTP conversation it could somehow query my Exchange server, checking whether the recipient address is valid or not, and based on the result of this query, accept or reject (possibly with some delay) the incoming message in a very early phase. In fact, I have already solved this issue by writing my own simple sendmail milter program which checks the recipient address against a text file with a list of valid addresses. But this solution is not satisfying me any longer, because it requires frequent updates of this file, and due to lack of time/motivation/programming skills, I don't want to cope further with my source code, adding to it the functionality of querying my Exchange server. Maybe I can achieve the desired effect by configuring an already available component of Linux software. Any ideas?
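
    For what it's worth (a hedged sketch; the LDAP spec below is a placeholder, and Active Directory normally requires an authenticated bind): stock sendmail can verify each recipient during the SMTP dialogue via LDAP routing against Active Directory - the same directory Exchange already uses - bouncing addresses that are not found:

        dnl sendmail.mc sketch: reject recipients not found in the directory.
        define(`confLDAP_DEFAULT_SPEC', `-h dc.my_domain.example -b "dc=my_domain,dc=example"')dnl
        FEATURE(`ldap_routing', , , `bounce')dnl
        LDAPROUTE_DOMAIN(`my_domain')dnl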

  • How to create an alias for a named SQL Server instance

    - by Svish
    On my developer computer I have an SQL Server instance named developer_2005. In the resource setting files of a C# application we are creating, the instance name is set to foobar (not really, but just as an example). So when I run the application (in debug or release) it tries to connect to an SQL Server on localhost named foobar. I am wondering if it is possible to create an alias or something like that, so that the application actually finds an SQL Server on localhost named foobar, but is actually connecting to the instance named developer_2005. The connection string in the config file of the application is:

        Data Source=localhost\foobar;Initial Catalog=barfoo;Integrated Security=True

    with provider name System.Data.SqlClient. If I change localhost\foobar to localhost\developer_2005, then the application can connect like it should. How can I create an alias so that I won't have to change the string in the file? I tried, in SQL Server Management Studio, to create a Server Registration with registered server name "localhost\developer", but this didn't seem to do any good. Not even sure what that really did... But then I discovered SQL Server Configuration Manager\SQL Native Client Configuration\Aliases. And I kind of assume this is where the solution lies. But I can't quite figure out how to add a new one... When creating a new one, I have to provide Alias Name, Port No, Protocol and Server, and I don't really have a clue what to put in any of them.
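
    For the record, a hedged sketch of what that Aliases dialog typically wants in this scenario (behavior can vary; with a blank port the client falls back to asking the SQL Browser service, which must then be running):

        Alias Name: localhost\foobar           (exactly what the connection string asks for)
        Protocol:   TCP/IP
        Server:     localhost\developer_2005
        Port No:    (blank, or the instance's TCP port)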

  • Preventing applications from performing run-once tasks for multiple users

    - by JohnyV
    In our environment we have several applications installed that need to run a little prompt the first time they run (e.g. Media Player, Google Earth, etc.). The problem is we have many users on many different computers, and the computers have DeepFreeze running on them, which removes the user's profile once the computer is restarted. So the next time that user logs in, they have to go through the whole run-once process again. I have managed to prevent the IE run-once using Group Policy, and the Office run-once using the Office Customization Tool. Is there a way to make this happen for other applications? On Windows XP we used to copy a user profile that had run all the apps into the default profile, so that new users get that profile as a template. Now with Windows 7 the process of copying profiles is not as easy. Is there an easy way to copy profiles in Windows 7, or is there a better way (e.g. modifying the registry or app data) to prevent apps from performing an initial run? Thanks
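
    The registry route tends to be per-application (a hedged example: the value below is the one commonly cited for suppressing Windows Media Player's first-run privacy prompt, and since it lives under HKCU it would be deployed via a logon script or Group Policy Preferences):

        Windows Registry Editor Version 5.00

        ; Suppress Windows Media Player's first-run prompt.
        [HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\Preferences]
        "AcceptedPrivacyStatement"=dword:00000001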

  • Best ASP.NET Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertisement and also give you very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirement factors:

    1) ASP.NET 3.5 or 4.0 supported
    2) URL Rewriter support
    3) GZip support (dynamic, through code)
    4) Initial setup support (if required)
    5) SQL Server 2005 or 2008
    6) Access to the SQL Server DB using SQL Mgmt Studio
    7) Environment supporting backup and restore of the DB on my own, without involving the tech support team
    8) Full-Text Search support
    9) FTP support
    10) I should be able to send at least 500 emails daily
    11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
    12) Alert email to be sent when they do any maintenance or during downtime
    13) Hosting price should be reasonable

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?
