Search Results

Search found 15820 results on 633 pages for 'domain transfer'.

Page 201/633 | < Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >

  • Windows Server: Change AD account name

    - by Bastien974
    Hello everybody. In my SBS 2008 environment (AD, Exchange), is it possible to change the name and email address of a user? He is leaving, and I'd like to transfer the whole account and its credentials to the new employee who is replacing him. Lots of things are set up for this user, and it would save me a lot of time if I could hand the account over like this. Thanks for your help!
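    For illustration only: renaming and re-pointing an existing AD account can also be scripted over LDAP. The sketch below uses the third-party Python ldap3 package; the server, credentials, DNs, and attribute values are all hypothetical, and note that on SBS the email side is normally handled through Exchange rather than by editing the mail attribute directly.

        from ldap3 import Server, Connection, MODIFY_REPLACE

        # Hypothetical host, credentials and DNs -- adjust for the real domain.
        server = Server("sbs01.example.local")
        conn = Connection(server, user="EXAMPLE\\Administrator", password="***", auto_bind=True)

        old_dn = "CN=Old Employee,CN=Users,DC=example,DC=local"

        # Rename the object itself (changes its CN in the directory).
        conn.modify_dn(old_dn, "CN=New Employee")

        # Then update the identity attributes on the renamed object.
        new_dn = "CN=New Employee,CN=Users,DC=example,DC=local"
        conn.modify(new_dn, {
            "givenName":         [(MODIFY_REPLACE, ["New"])],
            "sn":                [(MODIFY_REPLACE, ["Employee"])],
            "displayName":       [(MODIFY_REPLACE, ["New Employee"])],
            "sAMAccountName":    [(MODIFY_REPLACE, ["nemployee"])],
            "userPrincipalName": [(MODIFY_REPLACE, ["nemployee@example.local"])],
            "mail":              [(MODIFY_REPLACE, ["new.employee@example.com"])],
        })
        print(conn.result)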

    Read the article

  • What's the best way to migrate SELECT applications and data from an old Mac to a new one?

    - by jaydles
    I know I can make an easy transfer of everything using Time Machine, but I'm trying to transfer the minimum necessary to the new computer (applications I currently use, Aperture database, etc.) in order to keep the new machine as clean as possible, and avoid starting out with any legacy problems with permissions, etc. Clearly, I can copy individual apps and databases to an external drive and then install them on the new machine, but I'm trying to find an easier way.

    Read the article

  • How to set up linux watchdog daemon with Intel 6300esb

    - by ACiD GRiM
    I've been searching Google for some time now and have yet to find proper documentation on how to connect the kernel driver for my 6300ESB watchdog timer to /dev/watchdog and ensure that the watchdog daemon is keeping it alive. I am using RHEL-compatible Scientific Linux 6.3 in a KVM virtual machine, by the way. Below is everything I've tried so far.

    dmesg | grep 6300:

        i6300ESB timer: Intel 6300ESB WatchDog Timer Driver v0.04
        i6300ESB timer: initialized (0xffffc900008b8000). heartbeat=30 sec (nowayout=0)

    ll /dev/watchdog:

        crw-rw----. 1 root root 10, 130 Sep 22 22:25 /dev/watchdog

    /etc/watchdog.conf:

        #ping = 172.31.14.1
        #ping = 172.26.1.255
        #interface = eth0
        file = /var/log/messages
        #change = 1407

        # Uncomment to enable test. Setting one of these values to '0' disables it.
        # These values will hopefully never reboot your machine during normal use
        # (if your machine is really hung, the loadavg will go much higher than 25)
        max-load-1 = 24
        max-load-5 = 18
        max-load-15 = 12

        # Note that this is the number of pages!
        # To get the real size, check how large the pagesize is on your machine.
        #min-memory = 1

        #repair-binary = /usr/sbin/repair
        #test-binary =
        #test-timeout =

        watchdog-device = /dev/watchdog

        # Defaults compiled into the binary
        #temperature-device =
        #max-temperature = 120

        # Defaults compiled into the binary
        #admin = root
        interval = 10
        #logtick = 1

        # This greatly decreases the chance that watchdog won't be scheduled before
        # your machine is really loaded
        realtime = yes
        priority = 1

        # Check if syslogd is still running by enabling the following line
        #pidfile = /var/run/syslogd.pid

    Now maybe I'm not testing it correctly, but I would expect that stopping the watchdog service would cause /dev/watchdog to time out after 30 seconds and I should see the host reboot; however, this does not happen. Also, here is my libvirt config for the KVM VM: <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit sl6template or other application using the libvirt API.
--> <domain type='kvm'> <name>sl6template</name> <uuid>960d0ac2-2e6a-5efa-87a3-6bb779e15b6a</uuid> <memory unit='KiB'>262144</memory> <currentMemory unit='KiB'>262144</currentMemory> <vcpu placement='static'>1</vcpu> <os> <type arch='x86_64' machine='rhel6.3.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <vendor>Intel</vendor> <feature policy='require' name='tm2'/> <feature policy='require' name='est'/> <feature policy='require' name='vmx'/> <feature policy='require' name='ds'/> <feature policy='require' name='smx'/> <feature policy='require' name='ss'/> <feature policy='require' name='vme'/> <feature policy='require' name='dtes64'/> <feature policy='require' name='rdtscp'/> <feature policy='require' name='ht'/> <feature policy='require' name='dca'/> <feature policy='require' name='pbe'/> <feature policy='require' name='tm'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='pdpe1gb'/> <feature policy='require' name='ds_cpl'/> <feature policy='require' name='pclmuldq'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='acpi'/> <feature policy='require' name='monitor'/> <feature policy='require' name='aes'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/mnt/data/vms/sl6template.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:44:57:f6'/> <source bridge='br0.2'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='52:54:00:88:0f:42'/> <source bridge='br1'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <watchdog model='i6300esb' action='reset'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </watchdog> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> </domain> Any help is appreciated as the most I've found are patches to kvm and general softdog documentation or IPMI watchdog answers.
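    As a testing aid only (not part of the original post): the i6300esb device follows the standard Linux watchdog character-device API, so it can be exercised by hand. A rough Python sketch, to be run as root with the watchdog daemon stopped, since only one process may hold /dev/watchdog open. Be aware that letting the timer expire will reset the VM.

        import os
        import time

        # Opening the device arms the hardware timer (30 s heartbeat, per dmesg above).
        fd = os.open("/dev/watchdog", os.O_WRONLY)

        # Pet the timer a few times; any write counts as a keep-alive.
        for _ in range(3):
            os.write(fd, b"\0")
            time.sleep(10)

        # Now stop petting it: the VM should reset roughly 30 seconds later.
        print("waiting for the watchdog to fire...")
        time.sleep(120)

        # Never reached if the timer fires. Note that writing 'V' before closing
        # ("magic close") disarms the timer when nowayout=0, which is also why a
        # graceful stop of the watchdog daemon usually does not trigger a reboot.
        os.close(fd)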

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which have business logic local to the object). The api will likely have many calls as it is a long-term project. While thinking about the design, I recalled a few best practices. 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation based validation can be done using this approach, sure. What if I have two requests that take the same parameters, but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At this point however the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more). Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.) I do also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with the App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
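    A small sketch of the first question, using today's boto3 SDK (an assumption; the bucket and key names are placeholders). With versioning enabled, deleting a never-modified object only adds a delete marker, and removing that marker brings the original version back:

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-uploads-bucket"  # hypothetical bucket name

        # One-time switch per bucket.
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

        # After an accidental delete, find the delete marker for the key...
        resp = s3.list_object_versions(Bucket=bucket, Prefix="uploads/photo.jpg")
        for marker in resp.get("DeleteMarkers", []):
            if marker["IsLatest"]:
                # ...and remove it: the untouched original version becomes current again.
                s3.delete_object(Bucket=bucket, Key=marker["Key"],
                                 VersionId=marker["VersionId"])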

    Read the article

  • Enabling Google Webmaster Tools With Your GWB Blog

    - by ToStringTheory
    I'll be honest and save you some time: if you don't have your own domain for your GWB blog, this won't help, and you may just want to move on… I don't want to waste your time. Still here? Good. How great are Google's website tools? I don't just mean Analytics, which rocks, but also their Webmaster Tools (https://www.google.com/webmasters/tools/), which give you a glimpse into the queries that bring you your website traffic, search engine behavior on your site, and important keywords, just to name a few. [Pictured in the original post: cool statistics.]

    Problem

    Thanks to svickn over at wtfnext.com (another GeeksWithBlogs blog), we already know how to set up Google Analytics (wtfnext.com - How to: Set up Google Analytics on your GeeksWithBlogs blog). However, one of the questions raised in the post, and even semi-answered in the questions, was how to set up Google Webmaster Tools with your blog as well. At first glance, it seems like it can't be done. Google graciously gives you several different options for verifying that you own a site. The authentication options are: 1. (Recommended) Upload an HTML file to your server; 2. Add a meta tag to your site's home page; 3. Use your Google Analytics account; 4. Add a DNS record to your domain's configuration. Since you don't have access to the base path, you can't do #1. Same goes for #2, since you can't edit the master/index page. As for #3, they REQUIRE the Analytics code to be in the <head> section of your page, so even though we can use the workaround of hosting it in the news section, it won't be accepted since it isn't in the correct place.

    Solution

    Last I checked, I didn't see the DNS record option for Webmaster Tools. Maybe this was recently added, or maybe I don't remember it since I was always able to use some other method to verify. In this case though, this is the option that we need. My registrar wasn't in their list, but they provide detailed enough instructions for the 'Other' option: simply create a TXT record with your domain hoster (mine is DynDns), fill in the tag information, and then click Verify. My entry was able to be resolved immediately, but since you are working with DNS, it may take longer. If after 24 hours you still aren't able to verify, you can use a site such as mxtoolbox.com and type "txt: {domain-name-here}" in the search box to see if your TXT record was entered successfully. It is pretty simple to set up the TXT entry in DynDns, but if you have questions/comments, feel free to post them.

    Conclusion

    With this simple workaround (not really a workaround, but a feature, since they offer it), you are now able to see loads of information regarding your standings in the world of the Google search engine. No critical issues? Did I do something wrong?! As an aside, you can do the same thing with the Bing Webmaster Tools by adding a CNAME record to bing.verify.com… Instructions can be found on the 'Add Site' popup when adding your site. If you don't have your own domain but continued to read to this point – thank you!
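    As a small aside to the mxtoolbox tip above, the same TXT lookup can be done locally. A sketch assuming the third-party dnspython package (the domain is a placeholder; older dnspython versions use dns.resolver.query instead of resolve):

        import dns.resolver  # pip install dnspython

        domain = "example.com"  # your blog's domain
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("google-site-verification="):
                print("Verification record found:", txt)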

    Read the article

  • What is a good encryptable disk image format suitable for rsync on a PC?

    - by Greg Joshner
    I'm looking for a solution to encrypt my XP home directory and then rsync the encrypted image file to a remote server. Since I don't want to transfer several gigs for even the smallest change in the image, I'm looking for a solution that saves the image "chunked" into smaller files; that way rsync can transfer only the changed pieces. Do you have any ideas? Thanks a lot for your help!
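    To illustrate the idea behind such "chunked" formats (this is roughly what tools like duplicity do internally), here is a minimal Python sketch: split the image into fixed-size chunks, encrypt each chunk into its own file, and only rewrite chunks whose plaintext hash changed, so rsync re-transfers just those files. It assumes the third-party cryptography package; the paths, chunk size and key handling are placeholders, not a hardened design.

        import hashlib
        import json
        import os
        from cryptography.fernet import Fernet  # pip install cryptography

        CHUNK = 8 * 1024 * 1024  # 8 MiB per encrypted chunk file

        def backup(image_path, out_dir, key, manifest_path):
            f = Fernet(key)
            manifest = {}
            if os.path.exists(manifest_path):
                with open(manifest_path) as m:
                    manifest = json.load(m)
            with open(image_path, "rb") as img:
                index = 0
                while True:
                    chunk = img.read(CHUNK)
                    if not chunk:
                        break
                    name = "chunk-%06d.enc" % index
                    digest = hashlib.sha256(chunk).hexdigest()
                    if manifest.get(name) != digest:
                        # Only changed chunks are re-encrypted, so unchanged chunk
                        # files stay byte-identical and rsync skips them.
                        with open(os.path.join(out_dir, name), "wb") as out:
                            out.write(f.encrypt(chunk))
                        manifest[name] = digest
                    index += 1
            with open(manifest_path, "w") as m:
                json.dump(manifest, m)

        # Example: backup("C:/image.img", "D:/backup", Fernet.generate_key(), "D:/backup/manifest.json")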

    Read the article

  • Off-the-shelf solutions for migrating data from Azure Blob Storage to Rackspace Cloud Files

    - by S.C.
    I have large amounts of data (500+ GB) stored in Azure Blob Storage that I need to transfer to Rackspace Cloud Files. I know it is possible to perform such a migration using the SDKs from both services, but is there a free, standard, one-step process for doing this? I've built a POC utility but would like to avoid having to optimize it to perform the transfer within a reasonable amount of time. Thanks.
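    For illustration, a rough sketch of what the SDK-based copy looks like in Python, assuming the azure-storage-blob package and python-swiftclient (Cloud Files speaks the Swift API). The connection string, credentials, auth endpoint, and container names are placeholders, and for 500+ GB you would want streaming and parallel workers rather than buffering whole blobs as done here.

        import swiftclient  # pip install python-swiftclient
        from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

        azure = BlobServiceClient.from_connection_string("<azure-connection-string>")
        source = azure.get_container_client("uploads")

        # Auth endpoint and parameters are assumptions; adjust for your Rackspace account.
        rackspace = swiftclient.Connection(
            authurl="https://identity.api.rackspacecloud.com/v2.0/",
            user="rack-user",
            key="rack-api-key",
            tenant_name="rack-account-id",
            auth_version="2",
        )
        target_container = "uploads-backup"
        rackspace.put_container(target_container)

        for blob in source.list_blobs():
            # Naive: buffers each blob in memory before uploading it again.
            data = source.download_blob(blob.name).readall()
            rackspace.put_object(target_container, blob.name, contents=data)
            print("copied", blob.name)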

    Read the article

  • How to stabilize a disconnecting internet connection?

    - by All
    My internet connection keeps dropping out. It is not noticeable for web surfing, as the connection only dies for a few seconds and then everything is OK again. The IP is NOT changing; there is just a halt in data transfer. However, it is very annoying for applications that need a constant connection, like SSH: since it looks like a disconnection, SSH closes. Is there any way to stabilize this kind of interrupted connection, keeping sessions such as SSH alive even with zero data transferred? I am using Linux (Debian/Ubuntu).

    Read the article

  • Transferring from MailEnable to hMailServer

    - by air
    I have one Windows server (Plesk 9.5) with MailEnable Free Edition. I want to migrate from MailEnable to hMailServer, and I am looking for a program or script to transfer all email accounts, email redirects, and messages from MailEnable to hMailServer. Thanks.

    Read the article

  • Active Directory Services: PrincipalContext -- What is the DN of a "container" object?

    - by Ranger Pretzel
    I'm currently trying to authenticate via Active Directory Services using the PrincipalContext class. I would like to have my application authenticate to the Domain using Sealed and SSL contexts. In order to do this, I have to use the following constructor of PrincipalContext (link to MSDN page): public PrincipalContext( ContextType contextType, string name, string container, ContextOptions options ) Specifically, I'm using the constructor as so: PrincipalContext domainContext = new PrincipalContext( ContextType.Domain, domain, container, ContextOptions.Sealing | ContextOptions.SecureSocketLayer); MSDN says about "container": The container on the store to use as the root of the context. All queries are performed under this root, and all inserts are performed into this container. For Domain and ApplicationDirectory context types, this parameter is the distinguished name (DN) of a container object. What is the DN of a container object? How do I find out what my container object is? Can I query the Active Directory (or LDAP) server for this?

    Read the article

  • Using DefaultCredentials and DefaultNetworkCredentials

    - by Fred
    Hi, we're having a hard time figuring out how these credentials objects work; in fact, they may not work the way we expected them to. Here's an explanation of the current issue. We have 2 servers that need to talk to each other through web services. The first one (let's call it Server01) has a Windows Service running as the NetworkService account. The other one (Server02) has Reporting Services running with IIS 6.0. The Windows Service on Server01 is trying to use Server02's Reporting Services web service to generate reports and send them by email. So, here's what we have tried so far. Setting the credentials at runtime (this works perfectly fine): rs.Credentials = new NetworkCredentials("user", "pass", "domain"); Now, if we could use a generic user all would be fine, however... we are not allowed to. So, we are trying to use the DefaultCredentials or DefaultNetworkCredentials and pass them to the RS web service: rs.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials OR rs.Credentials = System.Net.CredentialCache.DefaultCredentials Either way won't work; we're always getting 401 Unauthorized from IIS. Now, what we know is that if we want to give access to a resource logged as NetworkService, we need to grant it to "DOMAIN\MachineName$" (http://msdn.microsoft.com/en-us/library/ms998320.aspx): Granting Access to a Remote SQL Server: If you are accessing a database on another server in the same domain (or in a trusted domain), the Network Service account's network credentials are used to authenticate to the database. The Network Service account's credentials are of the form DomainName\AspNetServer$, where DomainName is the domain of the ASP.NET server and AspNetServer is your Web server name. For example, if your ASP.NET application runs on a server named SVR1 in the domain CONTOSO, the SQL Server sees a database access request from CONTOSO\SVR1$. We assumed that granting access the same way with IIS would work. However, it does not. Or at least, something is not set properly for it to authenticate correctly. So, here are some questions: We've read about "Impersonating Users" somewhere; do we need to set this somewhere in the Windows Service? Is it possible to grant access to the NetworkService built-in account on a remote IIS server? Thanks for reading!

    Read the article

  • TFS Proxy server configuration

    - by Raj Kumar
    Hi, we have a TFS server on one domain and we are trying to configure a TFS proxy on a different domain that connects over the internet. Can anyone please let us know which user account has to be provided while configuring the TFS proxy? If it has to be an account from the TFS server's domain, then we can't provide one, because the proxy is on another domain. Regards, Raj Kumar

    Read the article

  • URL rewriting with mod_rewrite

    - by Steven
    The web server is Apache. I want to rewrite URLs so a user won't know the actual directory. For example: The original URL: www.mydomainname.com/en/piecework/piecework.php?piecework_id=11 Expected URL: piecework.mydomainname.com/en/11 I added the following statements in .htaccess: RewriteCond %{HTTP_HOST} ^(?!www)([^.]+)\.mydomainname\.com$ [NC] RewriteRule ^(w+)/(\d+)$ /$1/%1/%1.php?%1_id=$2 [L] (Of course I replaced mydomainname with my domain name.) The .htaccess is placed in the site root, but when I access piecework.mydomainname.com/en/11, I get "Object not found". I also tried the following statement in .htaccess: RewriteRule ^/(.*)/en/piecework/(.*)piecework_id=([0-9]+)(.*) piecework.mydomainname.com/en/$3 Again, when I access piecework.mydomainname.com/en/11, I get "Object not found". What's wrong?

    Read the article

  • Using DropDownList in EditTemplates of a GridView

    - by vaibhav
    I am working on a GridView in ASP.NET. When the page initially loads, my GridView looks as expected (screenshots were included in the original post). When a user clicks to edit a row, I use edit templates to show 'Domain' in a DropDownList. The problem is, when the DropDownList gets loaded with data, it loses the current value of 'Domain'. For example, if I edit the 4th row, its domain, which is currently set to 'Computers', gets changed to 'MBA', which is of course the first element returned by the DataSource. I want to display the current value ('Computers') as the selected value in the DropDownList, but I am unable to get the value of the Domain that is being edited.

    Read the article

  • Detecting REFERRER 301 redirects in AwStats

    - by Riccardo
    About six months ago, I moved a website to a new domain and helped the migration along using 301 redirects in the .htaccess of the old domain. This morning I was looking at the AwStats log of the new domain and was surprised to notice that in the "HTTP status codes" section, 301 redirects account for 77% of all codes (it seems 200s are not tracked here). So, what is the proper meaning of the 301 code in those stats? Does it mean that 77% of incoming (referrer) traffic comes from 301 redirects, or something else?

    Read the article

  • How should I use this SetSPN command when installing SharePoint

    - by Paul Rowland
    In the SharePoint install document I have, it says: If you use a domain user account for the SQL Server service account, you must make sure that a valid service principal name (SPN) for that account and instance of SQL Server on their database server exists in their environment. This is the case regardless of whether you use NTLM or Kerberos authentication for Office SharePoint Server 2007. You must configure the SPN for that account in the domain using the Setspn.exe command-line tool. Setspn.exe is installed by default on computers running Windows Server 2008. Run the following command on a computer that is joined to the same domain as the user/service account: setspn -a http/<farmclusterdnsname> <serviceaccountname> What should the parameters be in this case? I guess serviceaccountname would be 'domain\username'; I'm not sure what the first parameter should be, though. This is the technet link for SetSPN.

    Read the article

  • Security error accessing Service outside of FlexBuilder

    - by MikeHoss
    I'm very new to Flex and I have what I think it a head-scratcher. I am building a little Flash app that will consume some web services over HTTP. When I am in Flexbuilder and run my app there, it works fine. When I goto to my FlexBuilder project on my OS and double-click on it, it works fine. When I zip up my bin-debug file, I get this error: Security error accessing url faultCode:Channel.Security.Error faultString: 'Security error accessing url' faultDetail:'Destination: DefaultHTTP' So I googled that and got information on about the crossdomain.xml file. Well, I can't put a crossdomain file in the service I am calling, but I can put one somewhere else. So I put the following lines in Flex app: Security.allowDomain("vx1391"); Security.loadPolicyFile("http://vx1391:8080/job/Remote%20FIT%20Runner/ws/trunk/flash-cross-domain.xml"); My cross-domain.xml file is wide-open: &lt;cross-domain-policy&gt; &lt;allow-access-from domain="*"/&gt; </cross-domain-policy> Which I know is bad in a prod enivironment, but right now I just need to get this working locally but outside of FlexBuilder. Anyone want to help out this Flex-noob?

    Read the article

  • Kohana 3: How to find the active item in a dynamic menu

    - by Svish
    Maybe not the best explanation, but hear me out. Say I have the following in a config file called menu.php: // Default controller is 'home' and default action is 'index' return array( 'items' => array( 'Home' => '', 'News' => 'news', 'Resources' => 'resources', ), ); I now want to print this out as a menu, which is pretty simple: foreach(Kohana::config('menu.items') as $title => $uri) { echo '<li>' . HTML::anchor($uri, $title) . '</li>'; } However, I want to find the $uri that matches the current controller and action, and whether the action is the default one or not. What I want to end up with is that a menu item should have id="active-item" if it is linking to the current controller but the default action, and id="active-subitem" if it is linking to the current controller and the action is not the default one. Hope that made sense... Anyone able to help me out here? Both in how to do this in Kohana 3 and also how it should be done in Kohana 3. I'm sure there are lots of ways, but yeah... any help is welcome :) Examples: domain.com -- Home should be active-item since it is the default controller domain.com/home -- Home should be active-item domain.com/home/index -- Home should be active-item since index is the default action domain.com/resources -- Resources should be active-item domain.com/resources/get/7 -- Resources should be active-subitem since get is not the default action

    Read the article

  • 301 htaccess redirect: add segment to old URLs

    - by Rick
    I'm trying to make sure old URLs aren't broken after the site's URL structure has changed from this: http://www.domain.com/section/entry_name to this: http://www.domain.com/section/event_name/entry_name But to make it a bit more complex, I'm also using URL segments to sort entries, for example: http://www.domain.com/news/amazing_event/date/asc http://www.domain.com/videos/my_event/title/desc The new structure only affects one particular event (amazing_event) and should leave the other URLs alone. Where do I even begin to tackle this? My current .htaccess looks like: RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] Thanks - appreciate any tips.

    Read the article

  • Error installing Informatica

    - by raskart
    "The installer cannot ping the domain. Verify that Informatica Services is running on the domain host and select Retry. STDOUT: [1212] Command [ping] failed with error [[2121] Node [raskart] Domain [Doraskart] Host:Port [Doraskart:6001] has failed to ping back.]. exitcode -1" This error occurred when I tried to install Informatica on my system. I checked Informatica Services and it was running; I killed it and started it again, but I still have the problem. Please help me in this regard. Thanks in advance.

    Read the article

  • Problems with sending a multipart/alternative email with PHP

    - by saturdayplace
    Here's the script that's builds/sends the email: $boundary = md5(date('U')); $to = $email; $subject = "My Subject"; $headers = "From: [email protected]" . "\r\n". "X-Mailer: PHP/".phpversion() ."\r\n". "MIME-Version: 1.0" . "\r\n". "Content-Type: multipart/alternative; boundary=--$boundary". "\r\n". "Content-Transfer-Encoding: 7bit". "\r\n"; $text = "You really ought remember the birthdays"; $html = '<html> <head> <title>Birthday Reminders for August</title> </head> <body> <p>Here are the birthdays upcoming in August!</p> <table> <tr> <th>Person</th><th>Day</th><th>Month</th><th>Year</th> </tr> <tr> <td>Joe</td><td>3rd</td><td>August</td><td>1970</td> </tr> <tr> <td>Sally</td><td>17th</td><td>August</td><td>1973</td> </tr> </table> </body> </html> '; $message = "Multipart Message coming up" . "\r\n\r\n". "--".$boundary. "Content-Type: text/plain; charset=\"iso-8859-1\"" . "Content-Transfer-Encoding: 7bit". $text. "--".$boundary. "Content-Type: text/html; charset=\"iso-8859-1\"". "Content-Transfer-Encoding: 7bit". $html. "--".$boundary."--"; mail("[email protected]", $subject, $message, $headers); It sends the message just fine, and my recipient receives it, but they get the whole thing in text/plain instead of in multipart/alternative. Viewing the source of the received message gives this (lots of cruft removed): Delivered-To: [email protected] Received: by 10.90.100.4 with SMTP id x4cs111413agb; Wed, 25 Mar 2009 16:39:32 -0700 (PDT) Received: by 10.100.153.6 with SMTP id a6mr85081ane.123.1238024372342; Wed, 25 Mar 2009 16:39:32 -0700 (PDT) Return-Path: <[email protected]> --- snip --- Date: Wed, 25 Mar 2009 17:37:36 -0600 (MDT) Message-Id: <[email protected]> To: [email protected] Subject: My Subject From: [email protected] X-Mailer: PHP/4.3.9 MIME-Version: 1.0 Content-Type: text/plain; boundary="--66131caf569f63b24f43d529d8973560" Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 25 Mar 2009 23:38:30.0531 (UTC) FILETIME=[CDC4E530:01C9ADA2] X-TM-AS-Product-Ver: SMEX-8.0.0.1181-5.600.1016-16540.005 X-TM-AS-Result: No--4.921300-8.000000-31 X-TM-AS-User-Approved-Sender: No X-TM-AS-User-Blocked-Sender: No Multipart Message coming up --66131caf569f63b24f43d529d8973560 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit You really ought remember the birthdays --66131caf569f63b24f43d529d8973560 Content-Type: text/html; charset="iso-8859-1" Content-Transfer-Encoding: 7bit <html> <head> <title>Birthday Reminders for August</title> </head> <body> <p>Here are the birthdays upcoming in August!</p> <table> <tr> <th>Person</th><th>Day</th><th>Month</th><th>Year</th> </tr> <tr> <td>Joe</td><td>3rd</td><td>August</td><td>1970</td> </tr> <tr> <td>Sally</td><td>17th</td><td>August</td><td>1973</td> </tr> </table> </body> </html> --66131caf569f63b24f43d529d8973560-- It looks like the content-type header is getting changed along the way from multipart/alternative to text/plain. I'm no sysadmin, so if this is a sendmail issue I'm in way over my head. Any suggestions?
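    For comparison only (not a fix for whatever is rewriting the Content-Type header in transit), this is the same multipart/alternative structure built with Python's standard email library, which handles the boundary lines and the blank line required after each part's headers -- the usual hand-rolled pitfalls. Addresses and the SMTP host are placeholders.

        import smtplib
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        msg = MIMEMultipart("alternative")
        msg["Subject"] = "Birthday Reminders for August"
        msg["From"] = "sender@example.com"
        msg["To"] = "recipient@example.com"

        # In multipart/alternative, the last part attached is the one preferred
        # by mail clients that can render it.
        msg.attach(MIMEText("You really ought remember the birthdays", "plain"))
        msg.attach(MIMEText("<html><body><p>Here are the birthdays upcoming "
                            "in August!</p></body></html>", "html"))

        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)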

    Read the article
