Search Results

Search found 7238 results on 290 pages for 'step into'.

Page 212/290

  • Why is there wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don’t even know how this might have happened, so I’d like to learn. I’m trying out HostGator’s reseller service and I bought a domain name through it, but I didn’t want the default name servers and so I changed them during the registration. After registration the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

      $ whois -h whois.enom.com. example.com
      ...
      Name Servers:
      dns1.name-services.com
      dns2.name-services.com
      dns3.name-services.com
      dns4.name-services.com
      dns5.name-services.com
      ...

      $ whois -h whois.crsnic.net. example.com
      Domain Name: EXAMPLE.COM
      Registrar: ENOM, INC.
      Whois Server: whois.enom.com
      Referral URL: http://www.enom.com
      Name Server: NS1.HOSTGATOR.COM
      Name Server: NS2.HOSTGATOR.COM
      Status: clientTransferProhibited
      Updated Date: 01-jun-2010
      Creation Date: 31-may-2010
      Expiration Date: 31-may-2011
      >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
      ...

      $ dig +norecurse @b.gtld-servers.net. example.com. NS
      ...
      ;; AUTHORITY SECTION:
      example.com.   172763  IN  NS  ns2.hostgator.com.
      example.com.   172763  IN  NS  ns1.hostgator.com.
      ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.
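
    The NS records served by the gtld-servers come from whatever the registrar last pushed into the .com registry, so comparing the registrar record, the registry record and the live delegation side by side usually shows where the stale data is sitting. A quick diagnostic sketch along those lines (host names taken from the question above):

      whois -h whois.enom.com. example.com                     # registrar's own record
      whois -h whois.crsnic.net. example.com                   # registry (thin) record
      dig +norecurse @a.gtld-servers.net. example.com. NS      # delegation actually published for .com
      dig @dns1.name-services.com. example.com. NS             # what the intended name servers answer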

    Read the article

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:

      root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
      fsck from util-linux-ng 2.17.2
      e2fsck 1.41.12 (17-May-2010)
      fsck.ext2: Superblock invalid, trying backup blocks...
      fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7

      The superblock could not be read or does not describe a correct ext2
      filesystem. If the device is valid and it really contains an ext2
      filesystem (and not swap or ufs or something else), then the superblock
      is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193 <device>

    So I ran e2fsck with all the block numbers you need (I forget exactly what tool I used to find where the superblocks are hidden); no dice. Then I ran testdisk and had it look for the superblock, no results. Anyone have any ideas? fdisk -l for reference:

      root@Microknoppix:/home/knoppix# fdisk -l

      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x97646c29

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1          64      512000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              64       38912   312046593    f  W95 Ext'd (LBA)
      /dev/sda5              64         326     2104320   82  Linux swap / Solaris
      /dev/sda6   *         327        2938    20972544   83  Linux
      /dev/sda7            2938       38912   288968672+  83  Linux

    To be honest it looks like I lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
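
    One thing worth trying before giving up: the backup superblock locations depend on the filesystem's block size, and on a partition this large they are almost certainly not at 8193 (which assumes 1k blocks). A sketch of the usual hunt, using the device name from the question:

      # dry run: prints where mke2fs *would* place superblock backups for this partition; -n writes nothing
      mke2fs -n /dev/sda7
      # then try the printed backup locations read-only first, e.g. for 4k blocks:
      e2fsck -n -b 32768 /dev/sda7
      e2fsck -n -b 98304 /dev/sda7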

    Read the article

  • Can't find gnutls library when executing rpmbuild under non-root

    - by Rilindo
    I am trying to build ntfsprogs from the latest source, using the .spec from rpmforge, as non-root via rpmbuild. During the compile, it fails at this step:

      checking for GNUTLS... no
      configure: error: ntfsprogs crypto code requires the gnutls library.
      error: Bad exit status from /var/tmp/rpm-tmp.78913 (%build)

    However, I can compile it successfully outside of rpmbuild. So it sounds like it's just a matter of the library not being seen during the build. However, I can confirm that rpmbuild can see the directory where gnutls resides:

      [foo@bar ~]$ rpmbuild -E '%{_libdir}' rpmbuild/SPECS/ntfsprogs.spec
      /usr/lib

    Library location:

      [foo@bar ntfs-3g_ntfsprogs-2012.1.15]$ /sbin/ldconfig -p | grep -i gnutls
        libgnutls.so.13 (libc6) => /usr/lib/libgnutls.so.13
        libgnutls.so (libc6) => /usr/lib/libgnutls.so
        libgnutls-openssl.so.13 (libc6) => /usr/lib/libgnutls-openssl.so.13
        libgnutls-openssl.so (libc6) => /usr/lib/libgnutls-openssl.so
        libgnutls-extra.so.13 (libc6) => /usr/lib/libgnutls-extra.so.13
        libgnutls-extra.so (libc6) => /usr/lib/libgnutls-extra.so

    What would cause the library not to be seen when building an RPM? EDIT: Oh yeah, I am running CentOS 5.5.
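
    A "checking for GNUTLS... no" from configure usually points at the development files (headers plus the gnutls.pc pkg-config file) rather than the runtime libraries that ldconfig lists. A hedged first check along those lines (package names as they appear on CentOS 5):

      rpm -q gnutls-devel pkgconfig
      pkg-config --exists gnutls && echo found || echo missing
      pkg-config --cflags --libs gnutls

    If gnutls-devel turns out to be missing on the build host, adding "BuildRequires: gnutls-devel" to the spec keeps the problem from coming back.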

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

      - writing inode tables runs fast until it reaches 1053 (~ 1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display)
      - when I try to cancel the operation with Control+C it hangs for half a minute before it is really canceled

    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write / read values of around 95 / 110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the file system creation operation.

    The server details:

      Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
      mdadm - v2.6.7.1

    Does anyone have a suggestion on how to further debug this?
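
    Two things worth ruling out, sketched here with an assumed md device name of /dev/md0: mkfs competing with a resync still running in the background, and ext3 being created without RAID-aware geometry hints (for a 64k chunk and 4k blocks on a 4-disk RAID 5, stride = 16 and stripe-width = 3 * 16 = 48):

      cat /proc/mdstat                                   # any resync/recovery still in progress?
      mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0    # geometry hints for the ext3 allocator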

    Read the article

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it delivered to his account. On the Domino server, I also have hMailServer installed on port 25. I configured Domino to use port 26. I followed these steps to get where I am now:

      - I have set the fully qualified Internet host name to "preview.notes".
      - The SMTP Listener task was changed to Enabled to turn on the Listener so that the server can receive messages routed via SMTP routing.
      - Setting up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument)
      - I modified the person to use the [email protected] address.
      - I'm using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Will using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.
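
    Since the Domino SMTP listener is reachable by telnet, it can help to take DNS out of the picture and hand it a message by typing the SMTP dialogue directly; if that is accepted, the remaining problem is routing/DNS rather than the listener itself. A sketch, with the sender and recipient addresses as placeholders:

      telnet preview.notes 26
      # then type, one line at a time:
      #   HELO test.local
      #   MAIL FROM:<test@preview.local>
      #   RCPT TO:<someuser@preview.notes>
      #   DATA
      #   Subject: smtp test
      #
      #   test body
      #   .
      #   QUIT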

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

      Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
      Status: 0xc000000e
      Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there; only my Windows installation was listed. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, however it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
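
    A quick, non-destructive way to see what Windows itself thinks about the recovery environment is the Windows RE configuration tool that ships with Windows 7, run from an elevated command prompt; this is a sketch that only reports and toggles the registration, it does not rebuild a missing Winre.wim image:

      REM shows whether Windows RE is enabled and which partition/path holds the Winre.wim image
      reagentc /info
      REM re-registers Windows RE as the F8 "Repair Your Computer" target, if the image is present
      reagentc /enable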

    Read the article

  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create a replica off-site and have been having trouble getting it to work on a decent schedule, and recently we noticed issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder, and it's causing issues with replication and backup, so we decided the cleanest way forward was to v2v the machine to another datastore so that we had a clean single-VMDK setup to work with. This is where our trouble started. We first started off with a v2v using VMware Converter, connecting to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507 the new VM wouldn't take the address; it simply obtained a new MAC, received a DHCP lease and then would only boot up to a blank red screen, never the login screen. So the next step was to do an offline v2v, once we finally got it working. Same thing: I followed the KB to the letter and still it wouldn't take the MAC. I then tried it again and upon completion compared both old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached VMDK and then attached this to the new VM, and still no luck. After downloading the modified VMX file after the first boot and comparing it to the original I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue before on a P2V and I'm just wondering if someone could shed some light on this; maybe it's to do with RHEL licensing?
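
    One guest-side angle that is easy to overlook on RHEL 5: the old MAC is usually pinned inside the VM as well, both in kudzu's hardware database and as a HWADDR line in the interface config, so a converted VM whose virtual NIC presents a different MAC can end up with no usable eth0 at all. A sketch of clearing that (interface name assumed to be eth0):

      grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0     # does it still hold the old MAC?
      sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
      mv /etc/sysconfig/hwconf /etc/sysconfig/hwconf.bak        # let kudzu re-detect hardware on next boot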

    Read the article

  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1 GB. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and a Linux "cloud server" from RackSpace would do the job well for quite a low cost. However, I have little experience with Linux, so I have a few questions:

      - Does this sound like a good idea?
      - Is there anything wrong with linking Windows and Linux MySQL servers?
      - Does Linux have the equivalent of Remote Desktop Connection? If so, can I use this from a Windows machine?
      - Would a particular Linux distro be well suited to this task? RackSpace offer ArchLinux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu as I've heard it's more friendly for people coming from a Windows background.

    Any comments you have would be much appreciated! Phil
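
    For what it's worth, a Windows master replicating to a Linux slave is a normal setup as long as the MySQL versions are compatible; a minimal sketch of the moving parts (host name, account, and log coordinates are placeholders; the real file and position come from SHOW MASTER STATUS on the master):

      # master (Windows, my.ini):  [mysqld]  server-id=1  log-bin=mysql-bin
      # slave  (Linux,  my.cnf):   [mysqld]  server-id=2
      # then, on the slave:
      mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl',
          MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106;
          START SLAVE;"
      mysql -u root -p -e "SHOW SLAVE STATUS\G"

    As for remote access, a headless Linux slave is normally administered over SSH (e.g. with PuTTY from Windows) rather than a graphical desktop.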

    Read the article

  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE-imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking openssl. According to the manual this should be possible:

      If available, the OpenSSL library from http://www.openssl.org is used to support public key cryptography. The library must be built and installed prior to building NTP. The procedures for doing that are included in the OpenSSL documentation. The library is found during the normal NTP configure phase and the interface routines compiled automatically. Only the libcrypto.a library file and openssl header files are needed. If the library is not available or disabled, this step is not required.

    I already tried out ./configure --without-openssl; however, this didn't help. This is my ldd output:

      ldd ntpd/ntpd
        linux-gate.so.1 =>  (0xb7706000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
        libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
        librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
        /lib/ld-linux.so.2 (0xb7707000)
        libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
        libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
        libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using openssl 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without openssl?
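
    Rather than guessing at option names, it can help to ask this particular ntp release what it actually understands, make sure a cached configure result is not being reused, and then confirm the result before packaging:

      rm -f config.cache                                 # a stale cache can keep "finding" openssl
      ./configure --help | grep -i -E 'ssl|crypto'       # lists the exact disable/without option this release supports
      # after rebuilding with that option:
      ldd ntpd/ntpd | grep -i crypto || echo "no libcrypto linked"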

    Read the article

  • switchover in postgresql

    - by user1010280
    I am using PostgreSQL 9.0 with streaming replication. During switchover I follow these steps:

      1. Get the server timestamp on primary.
      2. Get the current log position on primary.
      3. Set Verify Log location.
      4. Verify Transaction Received Location.
      5. Shut down the DB on production.
      6. Synchronize the transaction logs from PR to DR.
      7. Trigger a failover on the DR database by creating the trigger file specified in recovery.conf.
      8. Verify DB mode on DR.
      9. Copy the control file from DR to primary.
      10. Copy the temporary stats file from DR to primary.
      11. Copy the history file from DR to primary.
      12. Create the recovery.conf file.
      13. Start the database in standby mode on the primary.
      14. Verify DB mode on PR.

    At step (6), I have to copy the last WAL generated on the primary to the standby and sync both PR and standby, but this takes time because the copy is remote. So Postgres will keep searching for the WAL for a long time, and after that it stops the server. So I want to know: is there any way to ask Postgres to stop searching for or locating WAL after shutdown? Postgres tries to locate this WAL every 5 seconds. Please reply as soon as possible... it's urgent...
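
    For reference, a minimal 9.0-era recovery.conf for the standby side of this procedure (step 12), with placeholder values; with streaming replication a restore_command is optional, and if one is used it should fail fast for files that do not exist yet rather than retrying forever:

      # $PGDATA/recovery.conf (PostgreSQL 9.0), placeholder values:
      standby_mode = 'on'
      primary_conninfo = 'host=primary.example.com port=5432 user=replication'
      trigger_file = '/tmp/pg_failover_trigger'
      # optional: restore_command = 'cp /archive/%f %p'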

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the Kernel Switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the normal state of affairs?
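
    One setting often used for exactly this kind of kiosk/appliance Windows 7 box is the boot-status policy in the BCD store, set from an elevated command prompt; a sketch, not a guarantee that it covers every unclean-shutdown prompt:

      REM tell the boot manager to ignore boot/shutdown failure flags instead of showing Windows Error Recovery
      bcdedit /set {current} bootstatuspolicy ignoreallfailures
      REM optionally also disable the automatic recovery sequence for the entry
      bcdedit /set {current} recoveryenabled No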

    Read the article

  • Calendar booking issue - Exchange 2003 and 2010

    - by NaOH
    In our organization we are running Exchange 2003 and 2010 simultaneously, with the hopes of migrating everyone to Exchange 2010 sometime within the next few months. Everyone is using Outlook 2010. Recently, we had an issue with transaction log storage on the Exchange 2003 server. This was resolved, but for some reason no meeting rooms on the Exchange 2003 server will automatically book meetings any longer. I have played around with this for a while, changing calendar permissions, turning resource scheduling off and back on, etc. No dice. My next step was to try migrating a resource to the Exchange 2010 server. After doing so, and setting it up as a Room, enabling Auto-Accept and removing the EnableDirectBooking registry entry on my PC, I can book a meeting with this room. If EnableDirectBooking is enabled, I get an error message stating: "Meeting Room" declined your meeting because it is recurring. You must book each meeting separately with this resource. This is despite the fact that the meeting I'm attempting to create has no recurrence. Now, I have also created a new test Room from scratch on the Exchange 2010 server, and I can book a meeting with this Room regardless of whether or not I have the EnableDirectBooking reg entry in place. All users here have this registry entry, and I'd rather not have to figure out how to push something out to remove it from every PC. Rather, I'd like to figure out what's different between the configurations of these two meeting rooms so that I could book a meeting room regardless of whether EnableDirectBooking is enabled or not. Any ideas, anyone? Thanks!

    Read the article

  • Getting prompted for password accessing page through script even when client and server are in the same domain

    - by Munawar
    I'm trying to pull up an internal web page in an automated fashion using the methods in 'Internetexplorer.Application' from VBScript, but I'm getting prompted for a password, although the client and the server are both in the same domain. Predictably, when I manually try to access the web page I don't have any problem; only when I try using cscript.exe or iexplore.exe do I get prompted. I'm trying to automate some of the smoke tests we do after a new build is deployed, and this password prompt is getting in the way. The system specs:

      Client machine - IE 7.0, OS is Windows Server 2003
      Server machine - Windows Server 2008
      Both are in the same domain.

    So far I've unsuccessfully tried the following to automate the password input:

      system.diagnostics.process.start
      var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
      WinHttpReq.Open("GET", "http://website", false);
      WinHttpReq.SetCredentials("username", "password", 0);

    Nothing seems to work. I checked in IIS; we have only anonymous and forms authentication enabled. Is there any configuration setting on the client machine that can be tweaked to bypass this? Although I'd hate to do it, since you step on the toes of twenty people trying to do that; the preferable way would be to input the password programmatically if that's possible. Also, if you can suggest a more appropriate forum, that'd be great too. Please help.

    Read the article

  • GNU screen, how to get current sessionname programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"] I'd like to write a script that can display the current screen session name and change the current session name. For example:

      sren armcross

    It will change the session name to armcross (ARM gcc cross compiler) and output something like:

      screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So, the key question now is how to get the current session name. Not only to display the old session name, but, according to "Is it possible to change GNU screen session name after created?", I have to know it (to pass to -d -r) before I can change it to something else. Can we use $STY for the current session name? No: $STY will not change after you have changed the session name to a user-defined one. However, for the command screen -d -r <oldsessname> -X sessionname armcross, <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY; otherwise screen spouts the error "No screen session found." Maybe there is a verbose way: use screen -list to list all sessions (user-defined names listed), then match the pid part from $STY against those listed sessions, and we will find the current session's user-defined name. It should not be so verbose for such a straightforward question, don't you think? The -d -D and -r -R options seem to expose too much implementation detail to screen's user. It seems that to rename a session you have to detach it, then do the rename, then reattach it. Right? My env: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06. Thank you.
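
    A sketch of the "verbose way" described above, wrapped in a small script: it derives the pid from $STY, looks up the session's current (possibly user-defined) name in screen -ls, renames it, and prints the old and new names. It assumes it is run from inside the screen session being renamed:

      #!/bin/sh
      # usage: sren <newname>
      new=$1
      pid=${STY%%.*}                                    # numeric prefix of $STY, e.g. 25278
      old=$(screen -ls | awk -v p="$pid" '$1 ~ "^" p "\\." { print $1 }')
      screen -S "$old" -X sessionname "$new"
      echo "screen session name changed from '$old' to '$new'"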

    Read the article

  • Will a 2.4GHz WAP interfere with a 5.0GHz WAP if placed directly next to each other?

    - by Dan
    This is mostly a curiosity question for people who know more about radio and Wi-Fi than I do. The 2.4GHz band is massively overpopulated near my house, to the point of sometimes getting 1000ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSIDs within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary as the others use Auto Channel and hop around. Now, the router I have is capable of 5.0GHz, as is the laptop I type this on. Switching to 5.0GHz gives superb results: I can download at ~90Mbps and get consistent 1ms pings. The problem is that only this laptop supports 5.0GHz! My question: would I still get decent 5.0GHz performance if I place a 2.4GHz access point directly next to my router? And, indeed, will 2.4GHz continue working as 'normal'? Testing would be an obvious step, but I threw all my superfluous equipment out in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I do believe there will be some impact by virtue of them being directly next to each other. (Cabling is not an option due to it being a rented house.)

    Read the article

  • A tale of two user ids: Why does NFS not recognize a new user id?

    - by user76177
    I have two servers running RHEL6. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's id on client is 502; appuser's id on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following:

      - Changed the id of the user in /etc/passwd on client to be 506
      - Changed ownership of appuser's $HOME on client to be appuser again so that I could log in.

    Now, when I go to look at the NFS share from the client side, I see that it is owned by 502. 502 is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership of appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client or done anything else yet.
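
    A couple of checks that usually narrow this down; the paths are placeholders, and the find/chown pass is only for local files on the client that still carry the old uid (the NFS share's ownership lives on the server):

      ls -ln /mnt/nfsshare          # on the client: shows the raw uid the server is actually exporting
      ls -ln /exported/dir          # on the server: should show 506 if the chown there really took
      # local files on the client created while appuser was 502 keep that uid until they are re-owned:
      find /home/appuser -xdev -uid 502 -exec chown -h appuser {} +

    With NFSv3 the client and server simply compare numeric uids, so once both sides agree on 506 the listing should follow; with NFSv4, id mapping (idmapd and its Domain setting) also has to match on both ends.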

    Read the article

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to manage his photos. However, recently, we put all of our photos on a NAS drive to allow us to access them from any computer at any time. The problem with this is that Lightroom cannot load catalogs from network drives. We need support for network drives because we'd like to be able to browse the photos from any computer, and for any computer to be able to add photos to the collection. Right now we're just syncing the Lightroom catalog file between us, but the extra step is a pain, and doing it manually makes it error-prone. Is there any software (free or commercial) that has proper support for network drives? The only real feature I need is to be able to sort photos by date and by some sort of tags. I don't need any editing features like those found in Lightroom; my dad is comfortable using Photoshop to edit photos. Also, if there is another solution to this that I haven't thought of, feel free to share.

    Read the article

  • Error during SSL installation cPanel/WHM

    - by baswoni
    I have a dedicated server and I am using the install wizard via WHM to install an SSL certificate. I have the following:

      - Certificate key
      - RSA private key
      - CA certificate

    I paste these three elements into the wizard along with the domain, IP address and username, but I get this error:

      SSL install aborted due to error: Unable to save certificate key. Certificate verification passed

    Have I missed a step? I have given it another go to make sure I am copying and pasting the info correctly, and I am now getting the following error:

      SSL install aborted due to error: Sorry, you must have a dedicated ip to use this feature for the user: username! If you are intending to install a shared certificate you must use the username "nobody" for security and bandwidth reporting reasons.

    Even though I am using a dedicated IP address, I am getting this problem. I thought I would also add that this SSL certificate was installed in a shared hosting environment with my previous hosting provider. The account with them is still active; however, the domain and its contents now reside on the dedicated server - could this cause problems?

    Read the article

  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my Dovecot (dovecot-2.0.9) configuration that looks like this:

      namespace {
        type = public
        separator = .
        prefix = news.
        location = maildir:/var/spool/news
        subscriptions = no
      }

    I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin:

      plugin {
        acl = vfile:/etc/dovecot/acls:cache_secs=300
      }

    After perusing the documentation, it seemed as if, given a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl:

      anyone rl

    But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything either. I've turned on mail debugging (mail_debug = yes), but the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and, if so, if you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g. the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.
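
    One thing worth double-checking in a setup like this: the plugin { acl = ... } section does nothing unless the acl plugin is actually loaded, and IMAP clients additionally need imap_acl to see the ACLs. A sketch of the pieces that normally have to line up in Dovecot 2.0 (the conf.d file layout is an assumption):

      # dovecot.conf or conf.d/10-mail.conf
      mail_plugins = $mail_plugins acl

      # conf.d/20-imap.conf
      protocol imap {
        mail_plugins = $mail_plugins imap_acl
      }

      # plugin settings, as already present in the question
      plugin {
        acl = vfile:/etc/dovecot/acls:cache_secs=300
      }

    With the plugin loaded, doveconf -n should list it, and the mail_debug log should start mentioning acl when mailboxes in the namespace are opened.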

    Read the article

  • Finding matching columns in Excel

    - by fakaff
    I've never used Excel before, so I need the simplest solution available, and this is a work assignment due this week, so I didn't have time to read much of the documentation. Basically, I have two tables, A and B, and they are both thousands of rows long. Description of my task: right now (since I don't know better) I'm manually doing this:

      1. Go to row i in table B.
      2. Select the entries in columns B(a, b, c) of that same row.
      3. Look for a row in table A where column A(b) matches row B(a).
      4. Paste the entries of columns B(a, b, c) of row i at the end of the row found in the last step.
      5. Repeat for row i + 1.

    Example: row B(cat, dog, mouse) matches A(mammal, cat, Mr. Whiskers), so I would paste B after A and have A(mammal, cat, Mr. Whiskers, cat, dog, mouse). Note: I am not joining tables. I am merely extending table A by pasting row B's entries if row A(b) matches row B(a). Also, sometimes entries are spelled slightly differently, so using wildcards to search for candidates would help. As the description should let on, this task is very tedious and inefficient if I don't know how to automate some of the operations (there are thousands of entries). Any quick tips on how to be more productive would be a big help.
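
    In Excel terms this is usually done with a lookup formula in helper columns of table A rather than manual copy-paste; a sketch, assuming table A's key is in column B of a sheet named TableA, and table B lives on a sheet named TableB with its key in column A and the values to pull in columns B and C:

      In TableA, cell D2:   =VLOOKUP($B2, TableB!$A:$C, 2, FALSE)
      In TableA, cell E2:   =VLOOKUP($B2, TableB!$A:$C, 3, FALSE)
      Loose spellings:      =VLOOKUP("*"&$B2&"*", TableB!$A:$C, 2, FALSE)

    Fill the formulas down for all rows; FALSE forces exact (or wildcard) matching, and rows with no match show #N/A, which makes the leftovers easy to spot.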

    Read the article

  • Failed to re-publish a page - Tridion 2011 SP1

    - by Wilson Yu
    We are getting a strange error when re-publishing the same page. The page was published successfully the first time and we can see the page on the presentation server. It failed with the following error (see below) when we tried to publish it again (no change to the page). The page ran OK within Template Builder and we got the correct HTML output; it failed in the last committing deployment step (Prepare Transport, Transporting, Preparing Deployment and Deploying are all successful). Once it fails to publish the second time, it always fails to publish, and we can't un-publish it either. Also, when we make a copy of the failed page and create a new page, we can publish the new page the first time; the new page then fails to publish the second time with the same error. Does anyone know what would cause this error? Any help would be greatly appreciated. Here is the error message:

      Committing Deployment Failed
      Phase: Deployment Prepare Commit Phase failed,
      Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "",
      Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""

    Read the article

  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently and I'm trying to add a new one to my domain, although I'm sensing I might have other issues in my domain. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance. My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC in the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted" and in the details see "The error was DNS name does not exist". Looking at RIVERCITY-DC12, I can see DNS is working; I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, it shows RIVERCITY-DC12 is the machine for all listed roles. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas?

    Environment:

      Domain: rivercity (rivercity.local)
      Forest functional level: Windows 2000 (I'm more than happy to raise this)
      Windows Server 2008
      All servers are Windows Server 2008 R2 SP1 (fully patched)
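
    The usual first check for "DNS name does not exist" during DCPROMO is whether the new box is even asking the right DNS server for the domain's service records; from the prospective DC, with its DNS client pointed at RIVERCITY-DC12:

      nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local
      nslookup rivercity.local

    If those don't resolve to RIVERCITY-DC12, the fix lies in the new server's DNS client settings (or missing SRV records on the existing DC) rather than in DCPROMO itself.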

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef, and most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day and we definitely do not want to run it unconditionally on each chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know if the deployment was successful; it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate; probably we are just doing it wrong from the start. Please share your thoughts/experience.
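
    One pattern that addresses the "restart the daemons and hope" part is to trigger a normal converge on demand over SSH, synchronously, and let the run's output and exit status tell the build whether deployment worked; a sketch using knife's built-in search-and-ssh (the node query and SSH user are placeholders):

      # run a single synchronous chef-client converge on the nodes that host the affected services
      knife ssh 'role:webapp' 'sudo chef-client' -x deploy
      # check knife's exit status and/or the captured output to decide whether the deploy succeeded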

    Read the article

  • SSH tunnel for socks5 proxy is slow with concurrent load

    - by RawwrBag
    I ssh to a remote AWS server using Ubuntu. I use ssh's port forwarding capabilities to do this. I have tried forwarding a dynamic port (ssh -D) and a single port (ssh -L with dante running as a remote SOCKS server); both are equally slow. I also tried different ciphers (ssh -c). Concurrent TCP connections pretty much do not work. For example, I can go to speedtest.net and start a test (which is fairly fast, probably maxes out my line speed), and if I try to do anything (e.g. load google.com) while the test is still running, all the additional connections seem to hang until the speed test is over. I realize OpenSSH is single-threaded. Is this the problem? It doesn't even show up in my top. Same goes for sshd on the remote server -- no processor hit. Is there any way to bump ssh performance, or should I step up to OpenVPN or something better suited for this?
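
    Before switching to OpenVPN, the tunnel itself can be instrumented a bit; a sketch of a SOCKS tunnel with keepalives, optional compression, a deliberately cheap cipher, and verbose output so stalls show up in the client log (host and port are placeholders):

      ssh -v -N -D 1080 -C -c aes128-ctr -o ServerAliveInterval=30 user@ec2-host
      # compare runs with and without -C: ssh compression is single-threaded and can itself become the
      # bottleneck on a fast link, which would look exactly like everything queueing behind one transfer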

    Read the article
