Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share on a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors once I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here; that did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, but whether or not I force sec=ntlmv2 it still authenticates without errors.

    smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

        Sharename       Type      Comment
        ---------       ----      -------
        IPC$            IPC       Remote IPC
        ETC$            Disk      Remote Administration
        C$              Disk      Remote Administration
        Share           Disk
        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find the last line intriguing/alarming. Does anyone have any pointers!? Maybe I misread the effin manual.
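
    Two variations worth trying against a Windows 2000-era server, as a sketch rather than a confirmed fix. The option set assumes your mount.cifs supports sec= and servern= (check mount.cifs(8)), and the host, share, and account names are the sanitized placeholders from the question:

        # Force the older NTLM security flavor and pass the domain separately
        mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share \
            -o user=username,domain=SUBADDOMAIN,sec=ntlm

        # With NetBIOS over TCP disabled, naming the server's NetBIOS name
        # explicitly (servern=) sometimes helps against old servers
        mount -t cifs //10.212.15.53/share /mnt/hostname.domain.tld/share \
            -o user=username,domain=SUBADDOMAIN,servern=HOSTNAME,sec=ntlm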

  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running from an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] on an .Xresources file, I now actually have xterm override the color values, and the theme looks perfect.

    Since I may be switching to another theme, I need some workflow to have vim actually do what xrdb does - reset the terminal color palette. Right now I have to reset the color values with xrdb first, then launch another xterm to actually pick up those values, then launch vim from that newly opened xterm to get the exact colors.

    The way I understand it, a vim color scheme, just as any other terminal application, uses colors by referencing their ids, and X resources set the values themselves. I think I saw somewhere on the Internet that terminal control sequences can reset the actual color values - in fact, I am sure they can, since I managed to set my terminal background color at runtime. How would I make vim execute these sequences to match the values for the color scheme? And is there any reference to these control sequences, as part of any standard?
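
    For reference, xterm documents these sequences in its "Xterm Control Sequences" (ctlseqs) document rather than in any formal standard: OSC 4 changes an indexed palette entry at runtime, which is the same thing xrdb pre-sets. A minimal sketch, where the palette index and RGB value are placeholders and shelling out on ColorScheme is just one possible workflow:

        # From the shell: set palette entry 1 to a new color in the running xterm
        printf '\033]4;1;rgb:dc/32/2f\033\\'

        " In .vimrc: re-emit the sequence whenever a color scheme is sourced
        autocmd ColorScheme * silent !printf '\033]4;1;rgb:dc/32/2f\033\\'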

  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good.

    The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see my task as failed, and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler or otherwise abort the task via VBScript? My code is below:

        Option Explicit
        Dim objShell, intShutdown
        Dim strShutdown, strAbort

        ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
        strShutdown = "shutdown.exe -r -t 600 -f"
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run strShutdown, 0, False

        ' Go to sleep so message box appears on top
        WScript.Sleep 100

        ' Input box to abort shutdown
        intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart"))
        If intShutdown = vbYes Then
            ' Abort shutdown
            strAbort = "shutdown.exe -a"
            Set objShell = CreateObject("WScript.Shell")
            objShell.Run strAbort, 0, False
        End If

        WScript.Quit

    Appreciate any thoughts.
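
    One shape an answer could take, as a sketch to verify against your task's settings: WScript.Quit accepts an exit code, and Task Scheduler records it as the task's last run result, so returning nonzero on cancel gives the scheduler something it can treat as a failed run. Whether your retry settings then re-run the task is the assumption to test:

        If intShutdown = vbYes Then
            ' Abort the shutdown and report failure to Task Scheduler
            strAbort = "shutdown.exe -a"
            objShell.Run strAbort, 0, False
            WScript.Quit 1   ' nonzero exit code = failed run
        End If
        WScript.Quit 0       ' clean exit = successful run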

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi,

    I have deployed a bunch of Windows server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (as does the DC).

    If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but if I provide the IP instead, it works. How can I pass the hostname and have it resolve to an IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP?

    Also, I don't know what the subnet mask is because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem?

    Thanks
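
    For what it's worth, the lookup directions are the other way around: forward lookup maps hostname to IP, reverse maps IP to hostname, so an empty reverse zone does not explain hostname resolution failing. A few quick checks that would narrow this down; the server names below are placeholders, and the assumption is that the clients should be using the DC's DNS as their only resolver:

        :: Ask the DC's DNS server directly for the target host
        nslookup targethost.mydomain.local dc01.mydomain.local

        :: Confirm which DNS servers and search suffixes this client really uses
        ipconfig /all

        :: Re-register this machine's A record with dynamic DNS
        ipconfig /registerdns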

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust.

    After some research I found out that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial-trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So, I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

        ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string."

    Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have a reference to the following in the project:

        MySql.Data
        MySql.Data.Entity
        MySql.Web

    My connection string in Web.config:

        <connectionStrings>
            <add name="AESSmartEntities"
                 connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
                 providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?
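
    A hedged guess at the missing piece: with the connector no longer installed in the GAC, nothing registers the ADO.NET provider factory machine-wide, so Entity Framework cannot locate it. A sketch of registering it in the application's own Web.config follows; the Version attribute is an assumption that must match the exact MySql.Data.dll you deploy in bin:

        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>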

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically, on the fly, from a template?

    For example, if I append a line like this to the KERNEL:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    then the script ks.cgi can determine what host this is (via the MAC address) and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it using our favorite configuration management product. We're experimenting with xCAT but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.

    Update: a roll-your-own solution is discussed in the O'Reilly book Managing RPM-Based Systems with Kickstart and Yum, Chapter 3, "Customizing Your Kickstart Install - Dynamic ks.cfg", which echoes some of the comments in this thread:

        To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

            boot: linux ks=http://your.kickstart.server/gen_config?host-server25

        In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25.

    But where, oh where, is the code for ks.cgi?
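
    To the closing question: a deliberately minimal sketch of what such a ks.cgi could look like, following the book's flat-file design. The paths, the hosts.txt layout, and the @IP@/@NODETYPE@ template placeholders are all invented for illustration, not taken from any existing tool:

        #!/bin/sh
        # ks.cgi -- emit a kickstart file for the host named in the query string,
        # e.g. APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?host=server25
        HOST=$(echo "$QUERY_STRING" | sed -n 's/.*host=\([^&]*\).*/\1/p')

        # Flat-file data store: "hostname ip nodetype", one host per line
        LINE=$(grep "^$HOST " /var/www/ks/hosts.txt)
        IP=$(echo "$LINE" | awk '{print $2}')
        NODETYPE=$(echo "$LINE" | awk '{print $3}')

        echo "Content-type: text/plain"
        echo ""
        # Substitute per-host values into a shared kickstart template
        sed -e "s/@IP@/$IP/g" -e "s/@NODETYPE@/$NODETYPE/g" /var/www/ks/ks.template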

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM.

    My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with the following issues:

    1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough?
    2. I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the VMs' system disks?

    For (2) I have the option of a RAID setup on the SAS controller, using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs.

    BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.

  • RAM ok in memtest86+ == RAM ok after wake from sleep?

    - by twon33
    I have a Windows XP (32-bit) system that appears stable in normal operation, but was repeatably freezing (hard lock, no BSOD) a minute or so after waking from S3 sleep. Some Googling against the motherboard model and memory manufacturer suggested that I might need to bump up the memory voltage, so I tried it and it now seems to resume without freezing. However, I don't really trust it and I'd like to validate that it's actually stable, especially after resuming from sleep. I've run Prime95 for a few hours with no issues, and am planning an overnight run of Memtest86+, which I expect to pass because the system has been solid whenever I've run it without putting it to sleep.

    Does something like Memtest86+ exist that actually invokes S3 sleep during operation? Clearly it would need an operator to wake the computer to resume testing, but I don't think I've ever heard of a memory test tool that can do this. Alternately, am I wasting my time? Should a clean bill of health from Memtest86+ indicate stability regardless of whether sleep is involved, or, conversely, does my original problem indicate that Memtest86+ would have failed eventually with the stock voltage if I'd run it, sleep or not?

  • virtualbox instances dedicated-server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a great idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs. Since each Vagrant/Chef recipe is configured to set the VM's host name, I can find/reference the appropriate VM by host name. Finally, the entire infrastructure is not directly accessible via the internet, so the dedicated server is also the OpenVPN host. The entire infrastructure may be seen as:

        +-------------------------------------+
        | Dedicated Server                    |
        |                                     |
        |  +-------------+   +------------+   |    +------------------+
        |  |   DNSMasq   |   |  OpenVPN   |<==========>|   Client   |
        |  +-------------+   +------------+   |    +------------------+
        |     ^    ^                          |
        |     |    |                          |
        |     |    +--+                       |
        |     |       |                       |
        |  +-------+  |                       |
        |  |  VM1  |  |                       |
        |  +-------+  |                       |
        |    ...      |                       |
        |  +-------+  |                       |
        |  |  VM2  |--+                       |
        |  +-------+                          |
        +-------------------------------------+

    Now some questions which I am struggling with:

    1. Are there any other suggestions for accessing a private infrastructure like this? I don't want to reinvent the wheel.
    2. On the dedicated server I don't see the vboxnet0 interface, although VirtualBox is installed (without the GUI). Accessing the virtual machines via ssh works fine. Did I miss something?
    3. DNSMasq must serve the local VMs only; otherwise there is a chance that my DNSMasq starts serving other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, aside from the fact that a second network card (which is not the case here) might start serving DHCP requests?
    4. How should I configure the VMs' network interfaces so that I can access them via OpenVPN and resolve their hostnames using DNSMasq? I think it should be a host-only network card. Should I do bridging in the OpenVPN config, or is routing sufficient? (See the dnsmasq sketch below.)
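
    A sketch of the dnsmasq side of this, under the assumption that vboxnet0 appears once a VirtualBox host-only network has actually been created (it does not exist until then) and that the host-only subnet is the VirtualBox default 192.168.56.0/24. Binding dnsmasq to that interface is a stricter alternative to only excluding eth0:

        # /etc/dnsmasq.conf -- serve DHCP/DNS on the host-only network only
        interface=vboxnet0          # bind to the VirtualBox host-only interface
        bind-interfaces             # do not listen on eth0 at all
        no-dhcp-interface=eth0      # belt and braces: never answer DHCP on eth0
        dhcp-range=192.168.56.20,192.168.56.100,12h
        # resolve DHCP lease hostnames under a local domain
        domain=vm.local
        local=/vm.local/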

  • setting up phpmyadmin with nginx within ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5, and phpmyadmin installs. I'm being directed to use the block below by the blog guide I'm following, but I'm not sure what to change to get it to point how I'm wanting it to:

        server {
            listen 80;
            server_name php.example.com;  # <- I know I need to edit this, but not sure to what.
            access_log /var/log/nginx/localhost.access.log;
            root /usr/share/phpmyadmin;
            index index.php;

            location / {
                try_files $uri $uri/ @phpmyadmin;
            }

            location @phpmyadmin {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_NAME /index.php;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include fastcgi_params;
            }
        }
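
    One sketch of the localhost variant, assuming php5-fpm really is listening on 127.0.0.1:9000 and phpMyAdmin lives in /usr/share/phpmyadmin as in the block above: serve it under a /phpmyadmin prefix with alias rather than making it the server root:

        server {
            listen 80;
            server_name localhost;

            location /phpmyadmin {
                alias /usr/share/phpmyadmin;
                index index.php;

                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    # $request_filename resolves correctly under alias
                    fastcgi_param SCRIPT_FILENAME $request_filename;
                    include /etc/nginx/fastcgi_params;
                }
            }
        }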

  • I/O Error on LG GSA-H12N DVD drive on Windows 7

    - by Ashwin
    I am facing an I/O error when I try to burn DVD data discs on my LG GSA-H12N DVD drive on Windows 7. Note that I was able to do this same operation on the same hardware/software just a day earlier without any problems, but with Windows XP. The only change (AFAIK) has been the installation of Windows 7 to replace Windows XP on this PC. Here is the error I get when I try to burn a DVD data disc using CDBurnerXP 4.2.7.1801:

        Burning error occured
        An error occured while burning the disc. Most likely the disc is not usable.
        Usually these errors happen if the inserted media is not compatible to the drive or of poor quality. (devNTSPTI_IO_Error)
        Could not write to Disc (LBA: 52864 Length: 32).
        SCSI Pass-through Interface I/O Error. - 0xFF045D

    Note that there can be no problem with the discs, since I have been using the same discs (from the same box) just before the Windows 7 installation with no problems. The only change has been Windows 7. I tried InfraRecorder v0.5 and ImgBurn v2.5 and got similar I/O errors.

    Note that Windows 7 lists the LG GSA-H21N drive as being compatible (see this link). I also checked the LG drivers website and, using the firmware update from there, updated the drive firmware from version UL01 to UL02. But even this has not helped. The drive reads DVDs without any problem, but continues to produce coasters. Could someone help me figure out what the problem is? Thanks :)

  • VPN Setup: Mac OS X and SonicWall

    - by noloader
    I'm trying to get VPN access up and running. The company has a SonicWall firewall/concentrator and I'm working on a Mac. I'm not sure of the SonicWall's hardware or software level. My MacBook Pro is OS X 10.8, x64, fully patched. The Mac Networking applet claims the remote server is not responding, and the connection attempt subsequently fails. This is utter bullshit: a Wireshark trace shows the Protected Mode negotiation, and then the fallback to Quick Mode.

    I have two questions: (1) does Mac OS X VPN work in real life? (2) Are there any trustworthy (non-Apple) tools to test and diagnose the connection problem (Wireshark is a cannon and I have to interpret the results)? And a third question (off topic): what is broken in Cupertino such that so much broken software gets past their QA department?

    EDIT (12/14/2012): The network guy sent me a "VPN Configuration Guide" (Equinox document SonicOS_Standard-6-EN). It seems an IPSec VPN now requires a Firewall Unique Identifier. Just to be sure, I revisited RFC 2409, where Main Mode, Aggressive Mode, and Quick Mode are discussed; I cannot find a reference to a Firewall Unique Identifier. I think I am screwed here: I am trying to connect to a broken (non-standard) firewall with a broken Mac OS X client. Fortunately, I can purchase VPN Tracker Personal (a {SonicWall|Equinox}-authored client) for $129US from Equinox. So much for standards....

  • Windows : Map-a-network-drive to a remote Shared-Folder (on QNAP NAS) using OpenVPN

    - by spelltox
    Given my lack of networking knowledge, I've been struggling with this issue for quite a few days now. I have a QNAP-TS212 NAS on which I've created a shared folder (mostly Excel files). All the computers in the local network (Windows) are able to access it without any problem. Now I want to access that shared folder remotely (from a Windows client), so:

    1. I enabled OpenVPN (and PPTP) in the QNAP admin.
    2. I installed OpenVPN on the remote client.
    3. I applied the configuration file that the QNAP generated (openvpn.ovpn):

        client
        dev tun
        script-security 3
        proto udp
        remote ***MY_WAN_IP*** 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        auth-user-pass
        reneg-sec 0
        cipher AES-128-CBC
        comp-lzo

    OpenVPN connects successfully from the remote client. Now, here's my problem: I can ping the NAS (at IP 10.8.0.1) from the remote client, but when I try to map a network drive, I don't see the shared folder or the NAS or any of the other computers in the network. I checked - all computers are in the "WORKGROUP" workgroup. I'm probably missing some basic knowledge, so any help would be greatly appreciated! Many thanks.
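
    Two things worth trying from the client end, as suggestions rather than a known fix: NetBIOS browsing (the thing that populates the workgroup view) generally does not cross a routed tunnel, so the share not showing up is expected even when SMB itself works; mapping by the tunnel IP can still succeed. The 10.8.0.1 address is taken from the question; the share and account names are placeholders:

        :: Map the share by its tunnel IP instead of browsing the workgroup
        net use Z: \\10.8.0.1\share /user:nasuser *

        :: Verify SMB (TCP 445) is reachable through the tunnel at all
        telnet 10.8.0.1 445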

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the X.509 certificate and private key, and for how those relate to the various security files the guide creates.

    Update: I have found a couple of links that describe some of the steps. These steps seem to work; however, I'm not sure if this is secure and the best way to do it:

    1. Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2. Create the X.509 cert:

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK and it should be accepted.

    One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
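
    On the pass-phrase question, a sketch of the usual pattern (mine, not from the guide): generate the key encrypted under a passphrase, then strip it so unattended tools can read the key. A passphrase-free key is only as safe as the file permissions protecting it, which is the point of the create-then-remove step:

        # Generate the key encrypted under a passphrase (prompts for one)
        openssl genrsa -aes128 -out my-private-key.pem 2048

        # Strip the passphrase so command-line tools can use the key unattended
        openssl rsa -in my-private-key.pem -out my-private-key-nopass.pem

        # Lock the plaintext key down to your user
        chmod 600 my-private-key-nopass.pem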

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.

    Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after they leave tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before, or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution?

    Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in  out source    destination
        1     591K   50M MARK     all  --  *   *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *   *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target   prot opt in  out  source    destination
        1       15  1052 SNAT     all  --  *   tun0 0.0.0.0/0 0.0.0.0/0   mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42

        # ip route show table 42
        default via VPN_IP dev tun0
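
    One commonly used shape for this kind of kill switch, as a sketch rather than a config tested on this exact box: accept the group's traffic when it leaves on tun0, then reject everything else from the group, so nothing marked can fall back to plain eth0. VPN_IP stands in for the provider's address, as in the question:

        # Let the group's packets out through the tunnel
        iptables -A OUTPUT -m owner --gid-owner groupname -o tun0 -j ACCEPT
        # Optionally allow direct traffic to the VPN endpoint itself
        iptables -A OUTPUT -m owner --gid-owner groupname -d VPN_IP -j ACCEPT
        # Everything else from the group is refused -- no fallback to eth0
        iptables -A OUTPUT -m owner --gid-owner groupname -j REJECT

    Note that this should not lock out the tunnel itself: the encrypted packets OpenVPN re-sends on eth0 belong to the daemon's own user and group, not to groupname, so the final REJECT does not catch them.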

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried.

    The simplest version:

        ' Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        ' Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. Deciding to resort to brute force, I put this in my login script:

        ' Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive.

    I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
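
    One hypothesis worth testing, offered as a guess rather than a confirmed diagnosis: under UAC on Server 2008 a logon gets two tokens, and drive mappings created under one token are not visible to processes running under the other, which matches "the command runs but the drive never appears". The EnableLinkedConnections value makes mapped drives visible across both tokens; whether it also covers subst drives is the assumption to verify:

        :: Make drive mappings visible across both UAC tokens (reboot required)
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v EnableLinkedConnections /t REG_DWORD /d 1 /f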

  • "Security Warning" comes up when I run via another program

    - by Alexander Bird
    If I execute vmmap from the command line it works fine. However, if I call some other program and pass vmmap as a parameter for that other program to start, then I get this "security error" popup - which makes it hard to automate scripts. In other words, I want to wrap vmmap via another program.

    In my case, I want to wrap vmmap because whenever it runs, it brings a window up momentarily and then disappears. So I try passing vmmap as an argument to another program which will start it "headlessly". I tried this program and this program, and in both cases I get the same popup, which defeats the purpose of automation.

    Why does this happen when the program isn't run directly? Does anyone know the internals of what this warning is? And, ultimately, is there a way to stop this from happening, but only for this instance? I don't want to disable this warning system on my whole computer.

    EDIT: I am using Windows Server 2003. I don't necessarily need solutions for other platforms, but I would like to know what they are if they are platform-dependent.
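
    A guess at the mechanism, not a confirmed answer: that dialog is usually Windows' Attachment Manager reacting to the Zone.Identifier alternate data stream that marks files downloaded from the Internet, and some launchers appear to trigger the check where a direct console start does not. If that is what is happening here, removing the stream from just that one executable avoids any machine-wide policy change; streams.exe is the Sysinternals utility:

        :: Show, then remove, the zone marker on this one binary only
        streams.exe vmmap.exe
        streams.exe -d vmmap.exe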

  • Excel concatenate strings from cells listed in third cell

    - by Puddingfox
    I have an Excel 2007 workbook that has five columns:

        A. A list of machines
        B. A list of service numbers for each machine
        C. A list of service names for each machine
        ...(nothing here)
        I. A list of service numbers
        J. A list of service names

    Each machine listed in column A has one or more services running on it from the list in column J. I would like to be able to add services to a machine (i.e. update the cell in column C) by simply adding another comma-separated number to column B. For example, the first row would look like this, assuming Machine1 has the first three services:

        |    A     |   B   |       C
        | Machine1 | 1,2,3 | HTTP,HTTPS,DNS

    Right now I have to manually update the formula in column C for each change I make. The current formula is:

        =CONCATENATE(J1,",",J2,",",J3)

    I would like to use something like this (please forgive my syntax; I'm a coder and I'm treating cell B1 as if it is an indexed array):

        =CONCATENATE(CELL("J"+B1[0] , "," , "J"+B1[1] , "," "J"+B1[2])

    although having variable numbers of services makes this even more difficult. Is there any way of doing this? For reference, this is columns I and J:

        | I  | J
        | 1  | HTTP
        | 2  | HTTPS
        | 3  | DNS
        .....
        | 16 | Service16

    I don't know very much about Excel, so any help is greatly appreciated.
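
    One way to sketch this, since Excel 2007 worksheet formulas cannot easily loop over a variable-length list: a small VBA user-defined function that splits the comma-separated numbers and concatenates the matching names from column J. The function name and the hard-coded "Sheet1"/column-J references are illustrative choices, not anything built into Excel:

        ' UDF: put =ServiceNames(B1) in C1 to get "HTTP,HTTPS,DNS"
        Public Function ServiceNames(ids As String) As String
            Dim part As Variant
            Dim result As String
            For Each part In Split(ids, ",")
                If result <> "" Then result = result & ","
                ' the row number in column J equals the service number
                result = result & Worksheets("Sheet1").Range("J" & Trim(part)).Value
            Next part
            ServiceNames = result
        End Function

    The function recalculates automatically whenever B1 changes, which removes the need to edit column C by hand.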

  • Wildcard DNS entry to match lang subdomain

    - by Adam Benayoun
    Hey,

    We have a website www.example.com pointing to x.x.x.1, and a system of multiple minisites, all having subdomains of example.com pointing to x.x.x.2. Basically what we have in place is a wildcard DNS entry that matches any possible subdomain; once a request reaches x.x.x.2, the Apache vhost intercepts it and redirects it to a PHP script, which in turn knows which minisite to serve.

    On www.example.com, however, we serve content that is translated into several languages. Until a few weeks ago you could switch languages by clicking on a flag, and you'd be served the translated content. The only problem is that the URL wouldn't change, and SEO-wise this isn't the best solution.

    Now, I cannot change the way subdomains are handled (being redirected to x.x.x.2), since we have hundreds, if not thousands, of minisites live. I have to come up with a solution to have language.example.com resolve to x.x.x.1, and then a rewrite rule that would rewrite the fake subdomain into a URL, in order to pass the language parameter to example.com.

    One solution is to list all possible languages as DNS entries right before the wildcard DNS entry. The other solution, which I am almost sure is not feasible, is to have some kind of regex in a DNS entry matching all subdomains with two letters (en|es|fr|cn|cl etc...). Any ideas?
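
    On the two options: standard zone files have no regex support, so the realistic version is the first one - enumerate the language codes above the wildcard, then rewrite the host into a query parameter on x.x.x.1. A sketch of both halves, where the zone syntax, the x.x.x.1/x.x.x.2 placeholders, and the lang parameter name are illustrative rather than copied from the real config:

        ; zone file: explicit language hosts win over the wildcard
        en   IN A x.x.x.1
        es   IN A x.x.x.1
        fr   IN A x.x.x.1
        *    IN A x.x.x.2

        # Apache vhost on x.x.x.1: turn the language subdomain into a parameter
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(en|es|fr|cn|cl)\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com$1?lang=%1 [R=301,QSA,L]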

  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purposes you may view any video on the website.)

    My problem - it's a minor problem, but annoying - is that in Firefox (Windows 7), the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox. The icons appear fine in Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X).

    I checked out the source and found that the font in question is called "FontAwesome". I found a question on S.O. that may explain why this happens - the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy's servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help.

    For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal
        }

        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit
        }
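
    Since Greasemonkey is on the table: boxed hex codes in Firefox are often caused by its same-origin restriction on webfonts, where a font served from another domain without CORS headers never loads. If that is the cause here, a user script can re-declare the font from a host that does send Access-Control-Allow-Origin. The CDN URL below is an assumption - substitute any CORS-friendly copy of FontAwesome:

        // ==UserScript==
        // @name     KhanAcademy FontAwesome fix
        // @include  http*://*.khanacademy.org/*
        // ==/UserScript==
        // Re-declare FontAwesome from a host that permits cross-origin fonts
        var css = "@font-face { font-family: 'FontAwesome'; " +
            "src: url('https://cdnjs.cloudflare.com/ajax/libs/font-awesome/3.0.2/font/fontawesome-webfont.woff') format('woff'); }";
        var style = document.createElement("style");
        style.textContent = css;
        document.head.appendChild(style);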

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file-hosting site that serves a large number of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh-file server, which stores newly uploaded content that is probably (and usually) rapidly accessed. It has 500GB of RAIDed SSD storage and can push over 3GBit of traffic.
    - Three cheap node servers, each containing 2 x 750GB SATA drives in RAID1, where files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.).

    Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script, with a different pass phrase. That all works fine and dandy... except that the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drives fast enough.

    What would be a good way to make files on the site accessible via two different network routes, one of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a for SCTP:

        SCTP:
            Local Address   Remote Address   Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
            --------------- ---------------- ------ ------- ------- ------- -------- -------
            0.0.0.0         0.0.0.0          0      0       102400  0       32/32    CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change?

    If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.).

    Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours. It's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to make it stop for a few hours, but then it started again.

  • OpenVPN Keeps Crashing

    - by Frank Thornton
    My OpenVPN server log shows this:

        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28523 [vpntest] Peer Connection Initiated with [AF_INET]<MY_IP>:28523
        Oct 20 21:00:44 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 MULTI_sva: pool returned IPv4=10.8.0.6, IPv6=(Not enabled)
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1576', remote='link-mtu 1376'
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1332'
        Oct 20 21:00:45 sb1 openvpn[2082]: <MY_IP>:28522 [vpntest2] Peer Connection Initiated with [AF_INET]<MY_IP>:28522
        Oct 20 21:00:45 sb1 openvpn[2082]: vpntest2/<MY_IP>:28522 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
        Oct 20 21:00:46 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 send_push_reply(): safe_cap=940

    Client file:

        client
        dev tun
        proto tcp
        remote <IP> 443
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        persist-key
        persist-tun
        auth-user-pass
        comp-lzo

    Server:

        port 443        #- port
        proto tcp       #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        reneg-sec 0
        #mtu-disc yes
        mssfix 1410
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /etc/openvpn/openvpn-auth-pam.so /etc/pam.d/login
        #plugin /usr/share/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login
        #- Comment this line if you are using FreeRADIUS
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf
        #- Uncomment this line if you are using FreeRADIUS
        client-to-client
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 3 30
        comp-lzo
        persist-key
        persist-tun

    What is causing the VPN to keep dropping the connection and then reconnecting?
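
    A reading of the warnings, offered as an interpretation rather than a confirmed diagnosis: local='tun-mtu 1532' is the server's 1500 plus tun-mtu-extra 32, while remote='tun-mtu 1332' means the client that actually connected negotiated as if it used tun-mtu 1300, so that client is not running the config shown above. The usual remedy is identical link-size directives on both ends; the values below simply mirror the server's:

        # Put the SAME values in every client config and the server config
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        comp-lzo        # compression must also match on both sides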

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. Firstly, I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted the configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
          CCLD   chroot
        Undefined                       first referenced
         symbol                             in file
        eaccess                             ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.
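
    A guess at a workaround, relying only on autoconf's standard cache-variable convention (ac_cv_func_<name>): tell configure that eaccess() is unavailable so the gnulib replacement is compiled in instead of referencing the missing symbol. Whether gnulib's euidaccess module honors this on coreutils 8.5 is the assumption to verify:

        # Re-run configure pretending eaccess() does not exist
        ./configure ac_cv_func_eaccess=no CC=/tool/sunstudio12.1/bin/cc
        /usr/sfw/bin/gmake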

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer there does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified, if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
          "/usr/bin/git pull"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => if_output_matches("Already up-to-date.","no_update");

        reports:
          no_update::
            "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell => "true";
          chdir => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on that? The execresult function is not a good choice for me, as a) the pull may become expensive at times (the cfengine3 reference recommends against expensive execresult calls) and b) it does not allow specifying user, group, or working directory, which is important in my case. The repository is in user space and not owned by root.
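
    One realistic shape for this, sketched against the CFEngine 3 reference and worth verifying on your version: there is no body that matches command output directly, but a classes body can set classes from return codes. A small shell wrapper (here called git-pull-wrapper, an invented name) that runs git pull and exits 1 unless the output is "Already up-to-date." converts the output into an exit code, while the contain body keeps user/group/chdir control:

        bundle agent updateRepo
        {
        commands:
          "/usr/local/bin/git-pull-wrapper"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => pull_classes;

        reports:
          repo_updated::
            "repository changed, triggering follow-up actions";
        }

        body classes pull_classes
        {
          # wrapper exits 0 for "Already up-to-date.", 1 when commits arrived
          kept_returncodes     => { "0" };
          repaired_returncodes => { "1" };
          promise_repaired     => { "repo_updated" };
        }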
