Search Results

Search found 23079 results on 924 pages for 'local variables'.

Page 564 of 924

  • Send mail from a distribution group's email address

    - by Campo
    A user has send permission on a distribution group on a Windows Server 2003 domain. I am the admin. When either of us sends email using the distribution group's email address we get a non-delivery report:

      Your message did not reach some or all of the intended recipients.
      Subject: TEST
      Sent: 4/19/2010 4:46 PM
      The following recipient(s) cannot be reached:
      [email protected] on 4/19/2010 4:46 PM
      You do not have permission to send to this recipient. For assistance, contact your system administrator.
      MSEXCH:MSExchangeIS:/DC=local/DC=DOMAIN:SERVERNAME

    Thanks, JC

  • Can connect to domain names typed in the URL but not to IP addresses

    - by Ben
    I just switched my modem over to bridged mode and changed my wireless router to PPPoE. My PC's IP address is reserved based on its MAC address, and the router forwards port 80 to it. I have a problem, however: I cannot access my local web server by its public IP address, or my router at 192.168.0.1, wirelessly from any other computer or from an iPad. I can, however, connect from this PC, which is wired to the router via Ethernet. Over wireless it says it cannot connect, yet domain names work (e.g. google.com). Any ideas?
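    A quick way to narrow this down from a wireless client (a hedged sketch; the 192.168.0.x and 203.0.113.x addresses below are placeholders for the server's LAN and public IPs):

      # If the LAN address answers but the public address does not, the router is
      # most likely not performing NAT loopback (hairpinning) rather than blocking wifi.
      curl -sI http://192.168.0.201/    # hypothetical LAN IP of the web server
      curl -sI http://203.0.113.10/     # hypothetical public IP of the connection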

  • Surgemail DNS lookup failure

    - by Spencer Ruport
    Just curious if anyone has any experience with Surgemail. I've set it up a couple of times and never had an issue, but my latest install keeps leaving outgoing messages in the queue with the error "DNS Lookup Failed". I double-checked that the local DNS server is running and even tried switching the IPs to my ISP's DNS servers, but still no go. The DNS status output shows:

      [DNS]          Ok(avge)  Bad(avge)
      76.227.63.137: 0(0.0s)   5(31.0s)
      76.227.63.254: 0(0.0s)   1(0.0s)

    Anyone have any ideas why this might be happening? Thanks.
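    One way to sanity-check those two resolvers from the shell (a sketch; example.com stands in for a real recipient domain):

      # MX lookups are what an outgoing mail queue needs; a timeout or SERVFAIL
      # here points at the resolver rather than at Surgemail itself.
      for ns in 76.227.63.137 76.227.63.254; do
        echo "== $ns =="
        dig +time=3 +tries=1 @"$ns" example.com MX +short || echo "lookup via $ns failed"
      done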

  • Newly registered domain name still doesn't show up after 72 hours.

    - by BioGeek
    Seven days ago I ordered a domain name with a local (Belgian) domain name agent. I already have webspace at a shared host in the US, so I filled in their nameservers on the form. I immediately paid with my credit card. Three days ago I received an e-mail from the domain name agent saying that my domain name was registered with the external nameservers I provided, and that the site would be visible within 24 hours. However, 72 hours after that mail I still can't see my domain name. A whois search shows that the domain is indeed registered in my name, but a ping to the domain returns "unknown host" and a traceroute gives the similar "Name or service not known". What can have gone wrong, and which (Linux) commands can I use to find out? Or should I just be patient and wait for the domain name to propagate?
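    Since the asker explicitly wants Linux commands, a minimal sketch (example.be and ns1.sharedhost.com are placeholders for the real domain and for the US host's nameserver):

      whois example.be | grep -i 'name server'       # what the registry has on file
      dig +short NS example.be                       # NS records as seen by your resolver
      dig +short A example.be @ns1.sharedhost.com    # does the host's own nameserver answer?
      dig +trace example.be | tail -n 25             # follow the delegation down from the root
      # If the registry lists the nameservers but the query against ns1... returns
      # nothing, the zone was never created on the hosting side; if whois shows no
      # nameservers at all, the registration/delegation itself is the problem.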

  • One eye on my dinner and one eye on SQL server

    - by fatherjack
    LiveJournal Tags: RedGate, Work Life Balance, Tips and Tricks, SQL Server. This is somewhere between a Tweet and a proper blog article - would that be a Bleet? Anyway, I was at a local restaurant yesterday, and after placing my order I was thinking about having to get home and log in to check some SQL Servers, when the thought came to me that, as we were near civilisation, there was likely to be a 3G signal that might actually make using the web browser on my phone bearable. It was surprisingly fast on my HTC Desire, almost as good as Wi-Fi. RedGate SQL Monitor works fine in the default HTC browser, and here is the proof: me checking the servers while waiting for the meal to arrive. Everything checked out OK, so I had the evening free from SQL Server. You can get a free 14-day full trial of SQL Monitor from RedGate here, or find out more about it at The Future of Monitoring. Disclosure: I am a friend of RedGate and as such regularly make positive comments about their products. I don't get paid for it, but I do get free licenses for testing and reviewing purposes.

  • Is there a serious issue with setting the SUID bit on tcpdump?

    - by Dean
    I'm running tcpdump on a remote machine and piping the output to Wireshark on my local machine over SSH. In order to do this, I had to set the SUID bit on tcpdump. For background, the remote machine is an Amazon EC2 instance running "Amazon Linux AMI 2012.09". On this image there is no root password and it is not possible to log in as root. You can't use sudo without a TTY (which a piped, non-interactive ssh command doesn't get), and therefore you have to set the SUID bit. What are the practical risks of setting this bit on tcpdump? Is there any need to be paranoid? Should I unset it whenever I'm not capturing?
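    For what it's worth, a commonly suggested middle ground (a sketch, not from the original post; the ec2-user account name and the tcpdump path are assumptions) is to grant only the capture capabilities instead of full root via SUID, and to restrict who can run the binary:

      sudo groupadd pcap
      sudo usermod -aG pcap ec2-user            # assumed login account
      sudo chgrp pcap /usr/sbin/tcpdump
      sudo chmod 750 /usr/sbin/tcpdump          # only root and the pcap group may run it
      sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump
      getcap /usr/sbin/tcpdump                  # verify the capabilities took

    With capabilities in place the SUID bit can stay off, which narrows what an exploited tcpdump process could do.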

  • Deleted items in the Deleted Items folder are not shown

    - by Ken
    When I run this cmdlet, I get the following result:

      [PS] C:\Windows\system32>Get-MailboxFolderStatistics user | ft FolderPath, FolderSize -autosize

      FolderPath                    FolderSize
      ----------                    ----------
      /Top of Information Store     156 B (156 bytes)
      /Calendar                     244.2 KB (250,025 bytes)
      /Contacts                     1.223 MB (1,282,252 bytes)
      /Contacts/SenderPhotoContacts 30.41 KB (31,139 bytes)
      /Conversation Action Settings 0 B (0 bytes)
      /Conversation History         206.2 KB (211,147 bytes)
      /Deleted Items                1.449 MB (1,519,602 bytes)
      /Drafts                       472 B (472 bytes)
      /Inbox                        618 MB (648,025,798 bytes)
      /Journal                      144 B (144 bytes)
      /Junk E-Mail                  131.9 KB (135,089 bytes)
      /News Feed                    0 B (0 bytes)
      /Notes                        1.847 KB (1,891 bytes)
      /Outbox                       0 B (0 bytes)
      /Quick Step Settings          0 B (0 bytes)
      /RSS Feeds                    0 B (0 bytes)
      /Sent Items                   6.754 KB (6,916 bytes)
      /Suggested Contacts           9.316 KB (9,540 bytes)
      /Sync Issues                  0 B (0 bytes)
      /Sync Issues/Conflicts        0 B (0 bytes)
      /Sync Issues/Local Failures   0 B (0 bytes)
      /Sync Issues/Server Failures  0 B (0 bytes)
      /Tasks                        7.994 KB (8,186 bytes)
      /Recoverable Items            12.16 MB (12,748,519 bytes)
      /Deletions                    0 B (0 bytes)
      /Purges                       0 B (0 bytes)
      /Versions                     0 B (0 bytes)

    But when I open the mailbox using both Outlook and OWA, the Deleted Items folder is empty. I'm guessing it's corrupted or something like that. Is it possible to recover it somehow? Thanks.

  • IIS 7.5 401.3 Access Denied

    - by Jeffrey
    I am having this weird issue with IIS 7.5 on Windows 2008 R2 x64. I created a site in IIS and manually created a test file index.html, and everything worked. When I try to do a deployment, I copy all the files from my local PC to the IIS server, try to access index.html (the properly deployed file) and get a 401.3 access denied error. I then manually recreate index.html, copy the content into the newly created file, and the page is accessible again... I just can't figure this out. So the issue is that IIS 7.5 can't serve files that have been copied from other PCs. I tried to reset/apply permission settings on the copied folders/files, but nothing has worked. Please help. Thanks! By the way, the files that I copied are just some HTML cut-ups, i.e. generic HTML, CSS and image files, nothing special.

  • Proxification rule for the System process

    - by kseen
    I'm trying to configure Microsoft Visual Studio 2010 remote debugging and ran into an issue: while connecting to the remote computer running MSVSMON, the client computer sends a SYN request for the connection, but it does so under the System process (as I see it in TCPView). As every network app must be configured to use a proxy on our network, I'm trying to add devenv.exe to the proxification rules to make its traffic go through the LAN's proxy server. It doesn't help. So my question is: how can I make that low-level System-process traffic go through the local area network proxy server?

  • How can I restore a VM on a new Hyper-V server?

    - by jaloplo
    I was happily working with my VM on my local Hyper-V server. But after installing some updates on the host, the system only shows the famous blue screen. I couldn't start the host, so I reinstalled it and configured it as a new Hyper-V server. My VM was on another disk precisely to survive this kind of failure, but I don't know how to add it as a new VM on the new server. In addition, this VM has various snapshots, so how can I add it to my new Hyper-V server? UPDATE: I can't do Export/Import because the server crashed before I could export it.

  • Jenkins swarm-plugin jar file won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy with a single command. However, when I try to get the slave command (a Java application) to run in the background without stalling the current process, it doesn't seem to work. I've created an init.d file and added it with update-rc.d, but that doesn't work:

      #!/bin/bash
      /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4

    I've also tried running it with an ampersand (&) to start the process in the background, but that doesn't work either because - from looking at the source - the jar actually boots another process that is then started in the foreground. Any ideas on how to make this jar start without stopping the bootstrap script?
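    A minimal detachment sketch (assuming the same jar path as in the post; the log and pid locations are chosen for illustration) that keeps the calling script moving even though the client itself runs in the foreground:

      #!/bin/bash
      # Start the swarm client with no controlling terminal and with stdio
      # redirected, so the init/automation script can continue immediately.
      nohup /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
          -executors 4 </dev/null >>/var/log/swarm-client.log 2>&1 &
      echo $! > /var/run/swarm-client.pid    # PID of the java process we launched

    If the client still dies when the parent shell exits, wrapping the java line in setsid, or handing the job to a supervisor such as start-stop-daemon, daemon(1) or an upstart/systemd unit, is the usual next step.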

  • Accessing a web server from the LAN and WAN

    - by jessh
    My router does not support loopback. In order to view a webpage on my server, I either have to type in the local IP (192.168.1.201) or be on another network. What are my options for making this easier? Here are some possibilities:

    1. Route all web traffic through an external proxy (seems to be overkill)
    2. Run my own DNS server (where to start?!)
    3. Buy a new router that supports loopback.

    Surely there is another way I can use my laptop on the LAN and the WAN by typing in my domain, more easily than with these solutions.
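    Option 2 is less work than it sounds. A sketch (assumes a Debian/Ubuntu machine on the LAN and uses example.com in place of the real domain): dnsmasq answers the domain with the server's LAN address for local clients and forwards everything else upstream.

      sudo apt-get install dnsmasq
      echo 'address=/example.com/192.168.1.201' | sudo tee /etc/dnsmasq.d/local-override.conf
      sudo service dnsmasq restart
      # Then hand this machine out as the DNS server in the router's DHCP settings:
      # LAN clients resolve the domain to 192.168.1.201, while the outside world
      # still resolves it to the public address.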

  • apt-mirror - Changing source mirror creates new folder for downloads

    - by I Kazi
    I have a local Ubuntu mirror running on Ubuntu 10.04 in my office, which uses archive.ubuntu.com to download updates and releases. I have been running this mirror since Ubuntu's Hardy Heron release. It downloads everything under the /export/ubuntu-repo1/apt-mirror/mirror/archive.ubuntu.com/ folder. Recently I learned that the mirror in India, in.archive.ubuntu.com, is a lot faster for me than http://archive.ubuntu.com, which is based in the UK. Therefore, to download the latest release, Quantal Quetzal, I configured the Indian mirror in /etc/apt/mirror.list. After making this change and leaving apt-mirror to run overnight, I found that it downloaded everything to a new folder called "in.archive.ubuntu.com", so now I have two folders where apt-mirror downloads updates:

      /export/ubuntu-repo1/apt-mirror/mirror/archive.ubuntu.com/
      /export/ubuntu-repo1/apt-mirror/mirror/in.archive.ubuntu.com/

    Since Apache does not have "in.archive.ubuntu.com" configured, Ubuntu clients are unable to access the Quantal Quetzal release and its updates. My questions: Is there a way I could copy everything downloaded under "in.archive.ubuntu.com" to "archive.ubuntu.com" so all new updates of the latest release become accessible to Ubuntu clients? Secondly, can I configure apt-mirror to download everything to archive.ubuntu.com even when using the Indian mirror? Thanks a lot for your help in advance. I Kazi
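    For the first question, an rsync merge is the straightforward route (a sketch using the paths from the post; run it while apt-mirror is idle):

      BASE=/export/ubuntu-repo1/apt-mirror/mirror
      rsync -aH --progress "$BASE/in.archive.ubuntu.com/" "$BASE/archive.ubuntu.com/"
      # Optionally point the new name at the old tree so future runs against the
      # Indian mirror keep landing in the directory Apache already serves:
      rm -rf "$BASE/in.archive.ubuntu.com"
      ln -s archive.ubuntu.com "$BASE/in.archive.ubuntu.com"

    Whether apt-mirror is happy writing through that symlink is worth testing on a small run first; as far as I know there is no mirror.list option to force a different target directory name, which is why the symlink trick is the usual workaround.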

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely, or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

      public_html   : live site and git repo
      testing       : a mirror of the site used for visual tests (full git repo)
      dev/ticket#   : git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

      cd public_html
      git init
      git add *
      git commit -m 'initial commit of the site'
      cd ..
      git clone public_html testing
      mkdir dev

    Development:

      cd dev
      git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:

      git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again and mark ticket# as ready for integration.

    Integration and deployment process:

      cd ../../testing
      git merge ticket# -m "integration test for ticket#" --no-ff    # check for conflicts
      # run hudson tests
      # visit www.domain.com/testing for a visual test

    If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin, otherwise git push origin; then:

      cd ../public_html
      git checkout -f    # live site should now have the latest dev from ticket#

    If the tests fail: revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense, and if not, do you have any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
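    On the reverting question, a common alternative (a sketch, not part of the original workflow) is to record an explicit revert of the integration merge instead of moving the branch backwards:

      # Revert a merge commit that has already been pushed/deployed; -m 1 keeps
      # the first parent (the testing branch line) as the surviving history.
      git revert -m 1 MERGE_COMMIT_SHA    # replace with the SHA of the bad integration merge
      # If the bad merge has NOT been pushed anywhere yet, discarding it is simpler:
      git reset --hard ORIG_HEAD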

  • Setting up Google Analytics for multiple subdomains

    - by Andrew G. Johnson
    So first, here's a snippet of my current Analytics JavaScript:

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-30490730-1']);
      _gaq.push(['_setDomainName', '.apartmentjunkie.com']);
      _gaq.push(['_setSiteSpeedSampleRate', 100]);
      _gaq.push(['_trackPageview']);
      (function() {
        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
      })();

    If you want a quick peek at the site, the URL is ApartmentJunkie.com. Keep in mind the site is pretty bare-bones, but you'll get the idea -- basically it's very similar to craigslist in the sense that it's in the local space, so people pick a city and then get sent to a subdomain specific to that city, e.g. winnipeg.mb.apartmentjunkie.com. I put that up late last night, then had a look at the analytics and found that I am seeing only the request URI portion of the URLs, as I would with any other site. With this site that's a problem, as winnipeg.mb.apartmentjunkie.com/map/ and brandon.mb.apartmentjunkie.com/map/ are two different pages and shouldn't be lumped together as /map/. I know the kneejerk response is likely going to be "hey, just set up a different Google Analytics profile for each subdomain", but there will eventually be a lot of subdomains, so Google's cap of 50 is going to be too limited, and more importantly I want to see the data in aggregate for the most part. I am thinking of making a change to the JavaScript, to something like:

      _gaq.push(['_trackPageview', String(document.domain) + String(document.location)]);

    But I am unsure if this is the best way, and figured someone else on wm.se would have had a similar situation they could talk a bit about.

  • Improper output in SSH session on OSX using FreeSSHd on Windows with cygwin bash/sh shell

    - by Tyler Clendenin
    I am testing out running an SSH server on a local Windows VM. I have installed FreeSSHd and set the command shell to "c:\cygwin\bin\sh --login -i" (bash as well) with "Use new console engine" unchecked. (When it was enabled, no output would come through the ssh connection at all.) The shell seems to work, but when I connect from my OS X Terminal using ssh, all of the shell output comes out ill-formatted:

      $ ls -al
      total 17
      drwxr-xr-x+ 1 SYSTEM        Administrators 4096 Feb 2 01:00 .
      drwxrwxrwt+ 1 Administrator Administrators    0 Feb 2 01:01 ..
      -rw-------  1 SYSTEM        Administrators  128 Feb 2 01:30 .bash_history
      -rwxr-xr-x  1 SYSTEM        Administrators 1150 Feb 2 00:55 .bash_profile
      -rwxr-xr-x  1 SYSTEM        Administrators 3754 Feb 2 00:55 .bashrc
      -rwxr-xr-x  1 SYSTEM        Administrators 1461 Feb 2 00:55 .inputrc

    Any ideas on why this is happening and how I can fix it?
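    Two quick things to try from the OS X side (hedged suggestions, not from the post; user@windows-vm is a placeholder): force a pseudo-terminal with a plain terminal type, and strip any stray carriage returns cygwin may be emitting.

      ssh -t user@windows-vm 'TERM=vt100 /bin/sh --login -i'    # allocate a pty, simple TERM
      ssh user@windows-vm ls -al | tr -d '\r'                   # drop CRs from non-interactive output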

  • Configuring WPA2-Enterprise with Freeradius

    - by Vincent O.
    I'm trying to set up an authenticated wifi network with FreeRADIUS. I've managed to get things working using self-signed certs, etc. The problem is that Windows clients need to uncheck the "Automatically use my Windows logon name and password [etc.]" option in the MSCHAPv2 settings. When I connect to my local university with eduroam, it automatically asks for a username and password instead of sending the Windows login credentials. How did the sysadmins accomplish this? Is it some kind of RADIUS attribute that gets sent back?

  • Installing/Uninstalling Windows 8 UI Apps in Windows 8 for all users

    - by Donotalo
    I've been using Windows 8 Pro 64-bit for quite a while now. My account is the only Administrator account on the PC. There are two other standard (and local) accounts. I've noticed that if I install an app from the Windows Store, that app is only available from my Start screen. Also, when I uninstall an app that is common to all users (e.g. Finance), it is only uninstalled from my account. I want to install an app and have it available to all users, and when I uninstall an app it should be removed for all users so that no other user has access to it. Just like installing/uninstalling programs on previous versions of Windows. How can I do that?

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out.

    Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux:

      - Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory
      - Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job
      - Get a central, global view of audited user sessions across your Oracle Linux environment

    "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works."

    Mellanox's end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and web 2.0 applications. Mellanox's interconnect solutions accelerate Oracle RAC query throughput performance to reach 50Gb/s, compared to TCP/IP-based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle's Exadata deliver a 10X performance boost at 50% of the hardware cost, making it the world's leading database appliance. Thanks for reviewing today's partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

  • DNS resolution over DHCP

    - by Eric
    I have:

      - a m0n0wall router
      - a VMware Workstation VM running Ubuntu
      - a Windows 7 workstation running the VM

    The Ubuntu hostname is "renraku". From the Windows machine I can't resolve DNS automatically for this host. For example, when I ping renraku:

      Ping request could not find host renraku. Please check the name and try again.

    However, nslookup seems to work:

      nslookup renraku
      Server:  m0n0wall.local
      Address: 192.168.123.254
      Name:    renraku
      Address: 192.168.123.248

    I don't get how to make ping work with hostnames. The main goal behind this is to have my web server work with hostnames instead of IP addresses. EDIT: ping 192.168.123.248 works.

  • Mac OSX Server 10.6 DNS Issues

    - by dallasclark
    Hi, the server was upgraded from 10.5 to 10.6. During the upgrade the Reverse Zones were lost, so I tried to recreate them but found that it's best to delete all zones and definitions and start again. I've started again and the Reverse Zones are appearing, but I'm still having issues. I receive the following errors (if they help):

      01-Nov-2010 12:52:01.254 client 192.168.1.52#57051: view com.apple.ServerAdmin.DNS.public: query (cache) 'server.dev.home.gateway/A/IN' denied
      01-Nov-2010 12:59:24.487 client 192.168.1.52#52858: view com.apple.ServerAdmin.DNS.public: query (cache) 'earth.server.dev.home.gateway/A/IN' denied

    At the moment I have the following setup in the DNS:

      1.168.192.in-addr.arpa.   Reverse Zone
        192.168.1.100           Reverse Mapping   MacPro-Server.local.
      server.dev.               Primary Zone
        server.dev.             Machine           192.168.1.100
        earth.server.dev.       Alias             server.dev.
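    A couple of checks from a client on the LAN can separate zone problems from client search-domain problems (a sketch; the names come from the zone list above, and 192.168.1.100 is assumed to be the DNS server itself):

      dig @192.168.1.100 server.dev. A +short          # does the primary zone answer?
      dig @192.168.1.100 earth.server.dev. A +short    # does the alias resolve?
      dig @192.168.1.100 -x 192.168.1.100 +short       # does the reverse zone answer?
      # The denied queries in the log are for server.dev.home.gateway, i.e. the
      # client appended its own 'home.gateway' search domain, which suggests the
      # clients' search/domain settings are worth checking alongside the server.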

  • Hyper-V core NIC speeds and registry changes

    - by gary
    Good afternoon. On a Dell PE T610 I have Hyper-V core running, with 2 x Broadcom BCM5709C NetXtreme II GigE NICs installed. I have noticed that copying large files (17 GB, for example) from a physical server on the network to the Hyper-V host's local drive [not a VM guest] is very slow in comparison to copying between physical servers:

      Copying a 17 GB file, physical to Hyper-V host: 30 minutes
      Copying a 17 GB file, physical to physical host: 15 minutes

    Can someone tell me exactly which registry values I should disable on the Hyper-V NICs to improve performance? So far I have gone to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318} and set the following to 0 on both physical NICs:

      *LSOv1IPv4
      *LSOv2IPv6
      *TCPUDPChecksumOffloadIPv4
      *TCPUDPChecksumOffloadIPv6

    Should I also disable *TCPConnectionOffloadIPv4 and *TCPConnectionOffloadIPv6? Many thanks in advance.

  • Can't RDP into a new (or old) Azure VM

    - by Raif
    I have an Azure account with a VM on it. I haven't used it in about 8 months. I tried to connect today but it won't take my credentials. Now, I'm not entirely sure I have the password right: pretty sure, but not entirely. So I created a new VM and set the password, clicked the Connect button in the portal, tried to connect, and was rejected using the password I know to be correct. I have disabled my local machine's firewall and antivirus.

  • Connect to LocalDB using SQL Server Management Studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc., but no luck. The following led me to just connect to it directly. Kind of obvious, really, when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview. After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:

      - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it and operates on data using the same T-SQL language as provided by SQL Express.
      - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
      - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to the process is closed, the process shuts down.
      - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
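    In practice that means typing (localdb)\v11.0 as the server name in the SSMS connect dialog. The same instance can be checked from a command prompt (a hedged example; sqlcmd ships with the SQL Server client tools):

      sqlcmd -S "(localdb)\v11.0" -E -Q "SELECT @@VERSION"              # -E = Windows authentication
      sqlcmd -S "(localdb)\v11.0" -E -Q "SELECT name FROM sys.databases"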

  • Is there any *good* HTML-mode for emacs?

    - by Carson Myers
    I love Emacs, and I want to do my web-programming work in it, but I can't find a way to get it to edit HTML properly. I mean it's seriously awful: it will do plain HTML fine, but not HTML mixed with PHP, JavaScript, etc. I tried html-helper-mode... I downloaded it, put it in /usr/local/share/emacs/site-lisp, and added it to my .emacs file:

      (autoload 'html-helper-mode "html-helper-mode" "Yay HTML" t)
      (setq auto-mode-alist (cons '("\\.html$" . html-helper-mode) auto-mode-alist))

    That was copied and pasted from some site (I don't know Elisp). It just doesn't highlight anything at all. I tried downloading a whole bunch of modes and using some other mode to string them together, to no avail. Emacs is so great in every other way -- why can't it do the simple task of editing web pages? It's a pretty standard thing for editors these days. So, does anyone know how to do this?
