Search Results

Search found 8839 results on 354 pages for 'optional parameters'.


  • Postfix SASL login failing: no mechanism found

    - by Nat45928
    Following the guide here: http://flurdy.com/docs/postfix/ with Postfix, Courier, MySQL, and SASL gave me a mail server whose IMAP functionality works fine. But when I log into the server to send a message, using the same user ID and password that connect to the IMAP server, it rejects my login to the SMTP server. If I do not specify a login for the outgoing mail server, it sends the message just fine. The error in Postfix's log is:

        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: connect from unknown[10.0.0.50]
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: SASL authentication failure: unable to canonify user and get auxprops
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL DIGEST-MD5 authentication failed: no mechanism available
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL LOGIN authentication failed: no mechanism available

    I've checked all usernames and passwords for MySQL. What could be going wrong?

    Edit: here is some other information. Installed libraries for Postfix, Courier and SASL:

        aptitude install postfix postfix-mysql
        aptitude install libsasl2-modules libsasl2-modules-sql libgsasl7 libauthen-sasl-cyrus-perl sasl2-bin libpam-mysql
        aptitude install courier-base courier-authdaemon courier-authlib-mysql courier-imap courier-imap-ssl courier-ssl

    And here is my /etc/postfix/main.cf:

        myorigin = domain.com
        smtpd_banner = $myhostname ESMTP $mail_name
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        #myhostname = my hostname
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        local_recipient_maps =
        mydestination =
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mynetworks_style = host
        # how long if undelivered before sending warning update to sender
        delay_warning_time = 4h
        # will it be a permanent error or temporary
        unknown_local_recipient_reject_code = 450
        # how long to keep message on queue before return as failed.
        # some have 3 days, I have 16 days as I am backup server for some people
        # whom go on holiday with their server switched off.
        maximal_queue_lifetime = 7d
        # max and min time in seconds between retries if connection failed
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s
        # how long to wait when servers connect before receiving rest of data
        smtp_helo_timeout = 60s
        # how many address can be used in one message.
        # effective stopper to mass spammers, accidental copy in whole address list
        # but may restrict intentional mail shots.
        smtpd_recipient_limit = 16
        # how many error before back off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12
        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, permit
        # Requirements for the sender details
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
        smtpd_data_restrictions = reject_unauth_pipelining
        # require proper helo at connections
        smtpd_helo_required = yes
        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes
        # not sure of the difference of the next two
        # but they are needed for local aliasing
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases
        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual
        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        # SASL
        smtpd_sasl_auth_enable = yes
        # If your potential clients use Outlook Express or other older clients
        # this needs to be set to yes
        broken_sasl_auth_clients = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain =
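
    A note on the likely failure point, offered as a sketch rather than a confirmed fix: "unable to canonify user and get auxprops" means the Cyrus SASL layer behind smtpd could not query its auxprop (SQL) backend, so it ends up offering no mechanisms at all - hence "no mechanism available" for both DIGEST-MD5 and LOGIN. In a flurdy-style build that backend lives in Postfix's Cyrus SASL config, typically /etc/postfix/sasl/smtpd.conf. The database, user, password and column names below are assumptions that must be matched to the guide's actual MySQL schema:

        # /etc/postfix/sasl/smtpd.conf -- sketch; align names with your schema
        pwcheck_method: auxprop
        auxprop_plugin: sql
        mech_list: plain login
        sql_engine: mysql
        sql_hostnames: 127.0.0.1
        sql_user: mail
        sql_passwd: mailpassword
        sql_database: maildb
        sql_select: SELECT clear FROM users WHERE id = '%u@%r'

    DIGEST-MD5 additionally requires the backend to hand Cyrus a plain-text password, so a mech_list restricted to plain and login (offered over TLS) is the usual choice; a mech_list naming mechanisms the SQL plugin cannot serve reproduces exactly the warnings above.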


  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS snapshotting in our NetBackup 6.5.3.1 environment. The clients are both 32- and 64-bit Windows Server 2003. Snapshot parameters are:

    - Instant recovery is enabled
    - Maximum snapshots = 1
    - Provider type = 1 (System)
    - Snapshot attribute = 1 (Differential)

    All backups complete successfully, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components).

    The issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows and shadow storage, both keep increasing. It is not honouring the Maximum Snapshots setting.

    The logs: the bpfis log doesn't really appear to show any errors, other than for methods which we are not employing (VxVM, FlashSnap, etc.). A section is as follows:

        11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det
        11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS.
        11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ...
        11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array management credentials are available in configuration file.
        11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS.
        11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid
        11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed

    The question: has anyone any experience of NetBackup / VSS not clearing snapshots after backups? We will ultimately be using an HP EVA for the snapshots, but we want to ensure correct functioning at the VSS level before we go further. Regards, Jonathan (PS: question previously posted by my colleague on entsupport.symantec.com)
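
    While the root cause is chased down, orphaned shadows can be listed and removed manually on the client with vssadmin, which ships with Windows Server 2003. A sketch of a cleanup workaround, not a NetBackup fix - the delete is destructive, so confirm what is listed first:

        vssadmin list shadows
        vssadmin list shadowstorage
        rem remove every shadow copy held for the D: volume
        vssadmin delete shadows /for=D: /all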


  • Strange Upload Problem on Hyper-V

    - by Ring0
    Hi. This one is driving me totally nuts. I have been trying to upload a file to www.virustotal.com (a harmless exe, I have since found out - DiskWipe.exe from diskwipe.org) using IE8.

    From Win 7 and Win 2008 R2 Datacenter (which I select to boot from VHDs) on my main machine's hardware, and also from another Win 7 PC elsewhere on my network, the upload to virustotal.com works perfectly. So using my native NICs everything is fine, and using another machine is also perfect.

    Right, OK. From my boot menu the default is my main development machine - the one I'm typing on now. This runs on the metal, has the Hyper-V role, and I have some guests; all guests are currently not running. Amazingly, from my console (root partition, to be exact) or any guest OS (2003 / XP / 2008 R2 etc.), my upload to virustotal.com slows at 32%, then HANGS at 38.something% and never finishes!

    Here is the kicker. I have another box (my main server) running Hyper-V on the metal with three live guests. It has identical hardware to my main dev machine, in another room (except its OS is Datacenter - mine is Enterprise). If I try to upload this file to virustotal.com from its bare-metal console or any guest using IE8, it stops in exactly the same place! The usual "steps I have tried" are kind of blown out of the water, as my server box is doing precisely the same thing as the machine in my room here.

    OK, commonalities: motherboard Gigabyte GA-X58-UD5 (revision F11 BIOS), 12GB Kingston RAM, Core i7 920 (4 cores with hyperthreading = 8), and Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NICs. All three machines have this same motherboard, 12GB RAM, and the Realtek NICs; all are x64. I also have a Win 7 box with the UD5 motherboard and 12GB RAM - bit of an overkill. :-) All these machines, when NOT running Hyper-V, can upload this file.

    Perhaps you may like to try it yourselves on a Hyper-V (2008 R2) box with IE8 and the Desktop Experience on, from the root OS or any guest, and see if it works or fails for you. So it's looking like NIC + Hyper-V = cannot upload this file (any file, I must add). The Realtek NIC driver is ver 7.002.1125.2008. Using IE8, I see in the NIC settings the usual parameters for jumbo frames, checksum offloading etc., and several others. Should I fiddle with these?

    I ran Netmon 3.3 in a guest and the TCP session halted as the upload failed. I suppose I could study that further; I don't have Netmon on the root partition machine (yet)! All OSes are fully patched, including today's Defender files. My box runs Office 2007, but the identical server in the other room does not. Also, if I fire up a VPN to a distant client and do the upload, it works - of course that's a different network path. Suggestions welcome please. If I left out anything important, please yell at me. Many thanks.
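
    On the question of fiddling with those NIC parameters: a common first step with Realtek NICs under Hyper-V is disabling the offload features, since a flawed offload implementation shows up exactly like this - transfers stalling partway through - only when the virtual switch is in the path. A sketch, assuming the Server 2008 R2 / Win 7 netsh syntax; re-test the upload after each change:

        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled
        netsh int ip set global taskoffload=disabled

    The per-adapter equivalents (Large Send Offload, IPv4 Checksum Offload, Jumbo Frame) can also be turned off in the Realtek adapter's Advanced properties on the physical NIC bound to the virtual switch.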


  • How to troubleshoot a PHP script that causes a Segmentation Fault?

    - by johnlai2004
    I posted this on stackoverflow.com as well, because I'm not sure if this is a programming problem or a server problem. I'm using Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5.

    I've noticed an unusual problem with some of my PHP programs. Sometimes when visiting a page like profile.edit.php, the browser throws a dialogue box asking to download the profile.edit.php page, and when I download it, there's nothing in the file. profile.edit.php is supposed to be a web form that edits user information. I've noticed this on some of my other PHP pages as well. In my Apache error logs, I see a segmentation fault message:

        [Mon Mar 08 15:40:10 2010] [notice] child pid 480 exit signal Segmentation fault (11)

    Also, the issue may or may not appear depending on which server I deploy my application to.

    Additional details: this doesn't happen all the time, only sometimes. For example, profile.edit.php will load properly, but as soon as I hit the save button (form action="profile.edit.php?save=true"), the page asks me to download profile.edit.php. Could it be that sometimes my PHP scripts consume too many resources?

    Sample code: upon the save action, my profile.edit.php includes a data_access_object.php file. I traced the code in data_access_object.php to this line here:

        if($params[$this->primaryKey]) {
            $q = "UPDATE $this->tableName SET ".implode(', ', $fields)." WHERE ".$this->primaryKey." = ?$this->primaryKey";
            $this->bind($this->primaryKey, $params[$this->primaryKey], $this->tblFields[$this->primaryKey]['mysqlitype']);
        } else {
            $q = "INSERT $this->tableName SET ".implode(', ', $fields);
        }

        // Code executes perfectly up to this point
        // echo 'print this'; exit;
        // If I uncomment the line above, profile.edit.php will actually show 'print this'.
        // If I leave it commented, the browser will ask me to download profile.edit.php
        if(!$this->execute($q)){ $this->errorSave = -3; return false;}
        // When I jumped into the function execute(), every line executed as expected,
        // right up to the return statement.

    And if it helps, here's the function execute($sql) in data_access_object.php:

        function execute($sql)
        {
            // find all list types and explode them
            // eg. turn ?listId into ?listId0,?listId1,?listId2
            $arrListParam = array_bubble_up('arrayName', $this->arrBind);
            foreach($arrListParam as $listName)
                if($listName)
                {
                    $explodeParam = array();
                    $arrList = $this->arrBind[$listName]['value'];
                    foreach($arrList as $key=>$val)
                    {
                        $newParamName = $listName.$key;
                        $this->bind($newParamName,$val,$this->arrBind[$listName]['type']);
                        $explodeParam[] = '?'.$newParamName;
                    }
                    $sql = str_replace("?$listName", implode(',',$explodeParam), $sql);
                }

            // replace all ?varName with ? for syntax compliance
            $sqlParsed = preg_replace('/\?[\w\d_\.]+/', '?', $sql);

            $this->stmt->prepare($sqlParsed);

            // grab all the parameters from the sql to create bind conditions
            preg_match_all('/\?[\w\d_\.]+/', $sql, $matches);
            $matches = $matches[0];

            // store bind conditions
            $types = '';
            $params = array();
            foreach($matches as $paramName)
            {
                $types .= $this->arrBind[str_replace('?', '', $paramName)]['type'];
                $params[] = $this->arrBind[str_replace('?', '', $paramName)]['value'];
            }
            $input = array('types'=>$types) + $params;

            // bind it
            if(!empty($types)) call_user_func_array(array($this->stmt, 'bind_param'), $input);

            $stat = $this->stmt->execute();

            if($GLOBALS['DEBUG_SQL'])
                echo '<p style="font-weight:bold;">SQL error after execution:</p> ' . $this->stmt->error.'<p>&nbsp;</p>';

            $this->arrBind = array();
            return $stat;
        }
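
    Since the crash is in the Apache child rather than in the PHP code itself, the most direct diagnostic is a core dump with a backtrace. A sketch for Ubuntu 9.10, where the core directory and paths are assumptions (debug-symbol packages such as apache2-dbg and php5-dbg make the trace readable):

        # let the Apache children write cores; /etc/default/apache2 is
        # sourced by the init script, so the ulimit applies to the workers
        echo 'ulimit -c unlimited' >> /etc/default/apache2
        mkdir -p /tmp/apache-cores && chown www-data /tmp/apache-cores
        # in /etc/apache2/apache2.conf add:
        #   CoreDumpDirectory /tmp/apache-cores
        /etc/init.d/apache2 restart
        # reproduce the crash, then:
        gdb /usr/sbin/apache2 /tmp/apache-cores/core
        (gdb) bt

    The backtrace usually names the extension or PHP internal responsible for the signal 11 - often a pattern like deep PCRE recursion on large inputs, which is worth suspecting given the preg_* calls above.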


  • SQLExpress service unable to start (error code 17053)

    - by Chris Sobolewski
    A user was instructed by their software support to upgrade a program and install SQLExpress as part of the installation process. Since that time, the service has been unable to start, citing error 17053, which appears to be an authentication issue. Here is the error log:

        2011-01-11 13:17:45.50 Server Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
        2011-01-11 13:17:45.50 Server (c) 2005 Microsoft Corporation.
        2011-01-11 13:17:45.50 Server All rights reserved.
        2011-01-11 13:17:45.50 Server Server process ID is 3332.
        2011-01-11 13:17:45.50 Server Authentication mode is WINDOWS-ONLY.
        2011-01-11 13:17:45.50 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'.
        2011-01-11 13:17:45.52 Server This instance of SQL Server last reported using a process ID of 2332 at 11/10/2010 2:15:24 PM (local) 11/10/2010 7:15:24 PM (UTC). This is an informational message only; no user action is required.
        2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        2011-01-11 13:17:45.52 Server Registry startup parameters:
        2011-01-11 13:17:45.52 Server -d c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf
        2011-01-11 13:17:45.52 Server -e c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG
        2011-01-11 13:17:45.52 Server -l c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf
        2011-01-11 13:17:45.52 Server Error: 17113, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server Error 3(The system cannot find the path specified.) occurred while opening file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
        2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        4 Server Error: 17053, Severity: 16, State: 1.
        2011-01-11 13:08:21.34 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered.
        12:47:20.85 spid5s SQL Trace ID 1 was started by login "sa".
        2011-01-11 12:47:20.90 spid5s Starting up database 'mssqlsystemresource'.
        2011-01-11 12:47:20.93 spid5s The resource database build version is 9.00.3042. This is an informational message only. No user action is required.
        2011-01-11 12:47:21.21 spid5s Error: 15466, Severity: 16, State: 1.
        2011-01-11 12:47:21.21 spid5s An error occurred during decryption.
        2011-01-11 12:47:21.38 spid8s Starting up database 'model'.
        2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x90.
        2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x1.
        2011-01-11 12:47:21.38 Server Error: 17826, Severity: 18, State: 3.
        2011-01-11 12:47:21.38 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
        2011-01-11 12:47:21.38 Server Error: 17120, Severity: 16, State: 1.
        2011-01-11 12:47:21.38 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

    One lead I had was to change the SQL logon account from "Network Service" to "Local System". Unfortunately, that results in the error message:

        The Security ID Structure is Invalid [0x80070539]

    Any help either uninstalling or getting SQLExpress running would be fantastic.
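
    The mix of "Operating system error 5 (Access is denied)" on the registry and error 3 on master.mdf suggests checking the service account and the -d/-e/-l startup paths before anything else. A sketch from an XP command prompt - the service name assumes the default SQLEXPRESS instance, and the MSSQL.1 registry hive matches the log above:

        rem which account the service runs as
        sc qc MSSQL$SQLEXPRESS
        rem the startup parameters the log is complaining about
        reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\Parameters"
        rem does the DATA directory exist, and can the service account read it?
        dir "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA"
        cacls "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA"

    If the DATA path is gone, or the account lacks rights on it and on the instance's registry key, both errors are explained; the "Security ID Structure is Invalid" message when switching accounts also points at a broken account/SID mapping rather than at SQL itself.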


  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 migration. I followed the Migration Guide from Microsoft, and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File, and the server is up and running. It all looks good - I can access Exchange and AD - and the only problem is the error message when you log in, stating that setup did not complete and to check the logs. Because all looks good, I am continuing the migration to the destination server. I should also state that this client does not use SharePoint at all. Do I have to redo everything? Herewith the logs:

        [4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry.
        [4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
           at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
           at System.Security.Principal.NTAccount.Translate(Type targetType)
           at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
           at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
           at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
           at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
        [4992] 121016.225454.7655: Setup: An error was encountered on the TME thread: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
           at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
           at System.Security.Principal.NTAccount.Translate(Type targetType)
           at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
           at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
           at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
           at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)
        [4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
           at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
           at System.Security.Principal.NTAccount.Translate(Type targetType)
           at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified)
           at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names)
           at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl)
           at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink)
           at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName)
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e)
           at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
           --- End of inner exception stack trace ---
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Delegate.DynamicInvokeImpl(Object[] args)
           at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
           at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
           at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
           at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
           at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard()
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args)
           at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args)
        [4956] 121016.225455.0865: Setup: Removed the password.
        [4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup
        [4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
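
    The stack shows setup dying inside NTAccount.Translate while ConfigureSharePointVSSRegistryTask grants registry access, i.e. one of the accounts that task expects cannot be resolved in the migrated AD. The failing call can be reproduced outside setup to find which account it is; a sketch in PowerShell, where the account name is a placeholder to be replaced with each SharePoint/VSS-related account in turn:

        # hypothetical name -- substitute each suspect account in turn
        $acct = New-Object System.Security.Principal.NTAccount("DOMAIN\spfarm")
        $acct.Translate([System.Security.Principal.SecurityIdentifier])

    An account that throws IdentityNotMappedException here is the one the task trips over; recreating it (or re-running setup once it resolves) would then be a targeted fix rather than redoing the whole migration.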


  • Connection Timed Out - Simple outbound Postfix for PHP Contact form

    - by BLaZuRE
    Alright, so I only got Postfix for a PHP contact form that will send email to a single external address ([email protected]). I have domain sub1.sub2.domain.com. I installed Postfix from the Ubuntu repo, with minimal config changes. I cannot get Postfix to send mail externally (though it succeeds for internal accounts, which is unnecessary). The email simply defers if I generate an email using PHP mail(). If I try to form my own in telnet, right after "rcpt to: [email protected]" I get:

        postfix/smtpd[31606]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: example.com; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost>

    When commenting out the default_transport = error and relay_transport = error lines, I get the following:

        Jun 26 14:33:00 sub1 postfix/smtp[12191]: 2DA06F88206A: to=<[email protected]>, relay=none, delay=514, delays=409/0.01/105/0, dsn=4.4.1, status=deferred (connect to aspmx3.googlemail.com[74.125.127.27]:25: Connection timed out)
        Jun 26 14:36:36 sub1 postfix/smtp[12225]: connect to mta7.am0.yahoodns.net[98.139.175.224]:25: Connection timed out
        Jun 26 14:38:00 sub1 postfix/smtp[12225]: 22952F88208E: to=<[email protected]>, relay=none, delay=655, delays=550/0.01/105/0, dsn=4.4.1, status=deferred (connect to mta5.am0.yahoodns.net[67.195.168.230]:25: Connection timed out)

    My main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = sub1.sub2.domain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = sub1.sub2.domain.com, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        default_transport = error
        relay_transport = error

    Also, dig sub1.sub2.domain.com MX returns:

        ; <<>> DiG 9.7.0-P1 <<>> sub1.sub2.domain.com MX
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4853
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;sub1.sub2.domain.com. IN MX

        ;; AUTHORITY SECTION:
        sub2.domain.com. 600 IN SOA sub2.domain.com. sub5.domain.com. 2012062915 7200 600 1209600 600

        ;; Query time: 0 msec
        ;; SERVER: x.x.x.x#53(x.x.x.x)
        ;; WHEN: Fri Jun 29 16:35:00 2012
        ;; MSG SIZE rcvd: 84

    lsof -i returns empty. netstat -t -a | grep LISTEN returns:

        tcp  0  0 localhost:mysql      *:*    LISTEN
        tcp  0  0 *:ftp                *:*    LISTEN
        tcp  0  0 *:ssh                *:*    LISTEN
        tcp  0  0 localhost:ipp        *:*    LISTEN
        tcp  0  0 *:smtp               *:*    LISTEN
        tcp6 0  0 [::]:netbios-ssn     [::]:* LISTEN
        tcp6 0  0 [::]:www             [::]:* LISTEN
        tcp6 0  0 [::]:ssh             [::]:* LISTEN
        tcp6 0  0 localhost:ipp        [::]:* LISTEN
        tcp6 0  0 [::]:microsoft-ds    [::]:* LISTEN
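
    The deferred "Connection timed out" entries to Google's and Yahoo's MX hosts, while DNS resolution clearly works, are the classic signature of outbound port 25 being filtered upstream - very common on hosted and residential networks. A quick check from the box (if this hangs and times out, the network is blocking 25):

        telnet aspmx.l.google.com 25

    If it is blocked, the usual way out is to relay through the provider's smarthost: remove the default_transport = error and relay_transport = error lines and set relayhost = [smtp.yourprovider.example] - a placeholder; use the relay your host actually offers, often on port 587 with SASL authentication.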


  • PHP.ini does not load

    - by Jonathan Park
    OK, this is probably just me not knowing enough about PHP, but here it goes. I'm on Ubuntu Hardy. I have a custom-compiled version of PHP, built with these parameters:

        ./configure --enable-soap --with-zlib --with-mysql --with-apxs2=[correct path] \
            --with-config-file-path=[correct path] --with-mysqli --with-curlwrappers \
            --with-curl --with-mcrypt

    I used the command pecl install pecl_http to install the http.so extension. It is in the correct module directory for my php.ini. My php.ini was loading, and I could change things within the ini and affect PHP. I included the extension=http.so line in my php.ini. That worked fine - until I added these compilation options in order to add IMAP:

        --with-openssl --with-kerberos --with-imap --with-imap-ssl

    This failed because I needed the c-client library, which I fixed with apt-get install libc-client-dev. After that, PHP compiles fine and I have working IMAP support, woo. HOWEVER, now all my calls to HttpRequest (which is part of the pecl_http extension in http.so) result in "Fatal error: Class 'HttpRequest' not found" errors. I figure the http.so module is no longer loading for one reason or another, but I cannot find any errors showing the reason.

    You might say "Have you tried undoing the new IMAP setup?" To which I will answer: yes, I have. I directly undid all my config changes and uninstalled the c-client library, and I still can't get it to work. I thought that was weird, since I had made no changes that should have resulted in this issue. Looking further, I discovered that not only is the http extension no longer loading, but all my extensions loaded via php.ini are no longer loading. Can someone at least give me some further debugging steps? So far I have tried enabling all errors, including startup errors, in my php.ini - which works for other errors, but I'm not seeing any startup errors on the command line or via Apache. And yet the php.ini appeared to be parsed, given that running phpinfo() showed settings that are in the php.ini.

    Edit: it appears that only some of the php.ini settings are being listened to. Is there a way to test my php.ini?

    Edit 2: it appears I am mistaken again, and the php.ini is not being loaded at all any longer. However, if I run phpinfo(), it says it's looking for my php.ini in the correct location.

    Edit 3: my config is at the config file path location below, but it says no config file is loaded. WTF. Permission issue? It is currently 644, so everyone should be able to read it, if not write it. I tried making it 777 and that didn't work either.

        Configuration File (php.ini) Path    /etc/php.ini
        Loaded Configuration File            (none)

    Edit 4: by loading the ini on the command line using the -c option I am able to run my files, and -m shows that my modules load. So nothing is wrong with the php.ini itself.
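
    Two checks worth making here. First, --with-config-file-path names the directory PHP should search for php.ini, not the file itself; pointing it at the file is a classic way to end up with "Loaded Configuration File (none)" while the reported search path looks right. Second, each SAPI resolves php.ini on its own, so the CLI behaving proves nothing about the Apache module. A sketch of the usual diagnostics, with paths assumed for Ubuntu Hardy:

        php --ini                                  # what the CLI parses
        php -i | grep 'Loaded Configuration File'
        apache2ctl -M | grep php                   # is mod_php loaded at all?
        ls -l /usr/lib/apache2/modules/libphp5.so  # is it your custom build?

    Also confirm the file is readable by the Apache user at the exact reported path; a php.ini the process cannot open is silently skipped, which matches "path correct, file not loaded".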


  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have Apache httpd in front of an application running in Tomcat. The application exposes URLs of the form:

        /path/to/images?id={an-image-id}

    The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache:

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on response, otherwise the downstream caching proxy
            # won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Note that I can't use ExpiresByType, since not all images served by the app have versioned URIs. I know that the ones served by the /path/to/images resource handler are versioned URIs, though, which don't perform any sort of content negotiation and thus are ripe for Far Future Expires management. This is working well for us.

    Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I could work around this by changing my Apache config appropriately:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on response, otherwise the downstream caching proxy
            # won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    This works fine in terms of serving the content, but there are no longer caching directives with the response. I've tried playing around with [PT] and [P] for the RewriteRule, and adding a new LocationMatch directive:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        # /new/path/to/images/12345 -> /path/to/images?id=12345
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

        <LocationMatch "^/new/path/to/images/">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug Apache like this would be appreciated as well!
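
    One pattern worth trying, sketched against the paths above: have the RewriteRule tag matching requests with an environment variable, and key the Header directives off that variable instead of off a Location. The header then follows the request regardless of which resource finally serves it:

        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT,E=CACHE_IMG:1]

        Header unset Set-Cookie env=CACHE_IMG
        Header append Cache-Control "max-age=8640000" env=CACHE_IMG

    Caveat: across an internal redirect Apache renames such variables with a REDIRECT_ prefix, so if the plain name never matches, try env=REDIRECT_CACHE_IMG. For debugging, mod_headers can echo what it sees (e.g. Header set X-Debug-Env "%{CACHE_IMG}e"), and raising RewriteLogLevel shows whether the rule fired at all.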


  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors, and looking at the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules. The problem I'm having is that whenever I POST some form data, it doesn't get through to the controller action (or model binder). I have SecRuleEngine set to DetectionOnly. I have SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off, it works - so it's definitely something to do with how ModSecurity forwards the body data. The ModSecurity debug log shows the following (looks to me as if everything passed through):

        Second phase starting (dcfg 94b750).
        Input filter: Reading request body.
        Adding request argument (BODY): name "[0].IsSelected", value "on"
        Adding request argument (BODY): name "[0].Quantity", value "1"
        Adding request argument (BODY): name "[0].VariantSku", value "047861"
        Adding request argument (BODY): name "[1].Quantity", value "0"
        Adding request argument (BODY): name "[1].VariantSku", value "047862"
        Input filter: Completed receiving request body (length 115).
        Starting phase REQUEST_BODY.
        Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"].
        Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against REQBODY_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"].
        Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"].
        Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"].
        Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'"
        Rule returned 0.
        Hook insert_filter: Adding input forwarding filter (r 5541fc0).
        Hook insert_filter: Adding output filter (r 5541fc0).
        Initialising logging.
        Starting phase LOGGING.
        Recording persistent data took 0 microseconds.
        Audit log: Ignoring a non-relevant request.

    I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action. No data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form.Keys and values via log4net, but I'm not getting any values there either. I'm starting to wonder if ModSecurity actually works with ASP.NET MVC, or if there is some conflict between the ModSecurity HTTP module and the model binder kicking in. Does anyone have any suggestions, or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?
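
    One scoped workaround to test the theory that the IIS module consumes the body without replaying it to the ASP.NET pipeline: leave SecRequestBodyAccess On globally, but switch it off for the affected paths with a ctl action, which narrows the problem down without giving up body inspection everywhere. A sketch - the rule id and path are placeholders:

        SecRule REQUEST_URI "@beginsWith /orders" \
            "id:1000100,phase:1,pass,nolog,ctl:requestBodyAccess=Off"

    If POSTs bind correctly under that rule, the fault is in the module's body forwarding rather than in your rule set - worth reporting against the ModSecurity IIS port, which was still young at 2.7.x.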


  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a set of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project were that:

    - The operating system must be functionally identical after the migration.
    - Minimal modification to the operating system (with the exception of kernel / drive mapping).
    - I was, however, allowed to change the bootloader (i.e. grub) in whatever way I see fit.

    I have attempted this, and I will first show you the steps I took. This is CentOS 5.5 specific at the moment.

    Steps:

        yum install kernel-xen

    This installed: 2.6.18-194.32.1.el5xen. I then edited /boot/grub/menu.lst to match:

        title CentOS (2.6.18-194.32.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
            initrd /initrd-2.6.18-194.32.1.el5xen.img

    Then I changed my XenServer parameters to match:

        xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
        xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
        xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
        xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true

    Some things to note: I am running a VolGroup LVM ;) Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine; however, I am presented with this error:

        device-mapper: dm-raid45: initialized v0.2594l
        Waiting for driver initialization.
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Activating logical volumes
        Volume group "VolGroup00" not found
        Creating root device.
        Mounting root filesystem.
        mount: could not find filesystem '/dev/root'
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory

    Now, my hint is that it cannot detect / because changing from HVM mode to PV does something not-so-obvious. When you make an SR (storage) for an HVM guest, it gets mounted in the guest OS as /dev/hda; in PV mode, it presents itself as /dev/xvda... Could this be the answer? And if so, how the heck do I implement it??

    Update: So I have gotten a bit further in my quest, as it now detects the LVMs. To do this, I had to rebuild the xen-kernel initrd image. Command:

        mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    Now when I boot I get this:

        Loading dm-raid45.ko module
        device-mapper: dm-raid45: initialized v0.2594l
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Found volume group "VolGroup00" using metadata type lvm2
        Activating logical volumes
        3 logical volume(s) in volume group "VolGroup00" now active
        Creating root device.
        Mounting root filesystem.
        mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
        Kernel panic - not syncing: Attempted to kill init!
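
    A further sketch, given where the second boot now fails: VolGroup00 activates, and the mount of /dev/root then fails "Device or resource busy" immediately after the initrd's dmraid pass has scanned and configured the disks - which suggests dmraid is claiming the device before the root fs can be mounted. CentOS 5's mkinitrd can leave that pass out; the --omit-dmraid flag is assumed available, as on stock 5.5:

        mkinitrd -v -f --omit-dmraid --builtin=xen_vbd --preload=xenblk \
            initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    On the /dev/hda vs /dev/xvda point: with root on LVM the filesystem is found by volume group name, so the device rename mostly matters for any /boot entries in /etc/fstab and for grub's device map, which are worth checking against xvd* naming as well.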


  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally, so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. E.g., if I run dig apache.org three times, none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        # logfile <file>
        # debug-level <level>
        # threads <initial #threads to use>
        # max-threads <maximum #threads to use>
        # server-user <user to run server as instead of root>
        # server-user is ignored if nscd is started with -S parameters
        # stat-user <user who is allowed to request statistics>
        # reload-count unlimited|<number>
        # paranoia <yes|no>
        # restart-interval <time in seconds>
        #
        # enable-cache <service> <yes|no>
        # positive-time-to-live <service> <time in seconds>
        # negative-time-to-live <service> <time in seconds>
        # suggested-size <service> <prime number>
        # check-files <service> <yes|no>
        # persistent <service> <yes|no>
        # shared <service> <yes|no>
        # max-db-size <service> <number bytes>
        # auto-propagate <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.
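
    One thing to be aware of: dig never touches nscd (or nsswitch at all) - it speaks DNS directly to the servers in resolv.conf, so it will never register as a cache hit. Only callers of the libc resolver (getaddrinfo()/gethostbyname()) go through the hosts line, and getent uses exactly that path, so it is the right probe:

        getent hosts apache.org     # first call should fill the cache
        getent hosts apache.org     # second call should be served by nscd
        nscd -g | grep -A 10 'hosts cache'

    Also, the stock glibc arrangement is "hosts: files dns"; nscd interposes on that transparently, and a "cache" keyword in nsswitch.conf is only meaningful if a separate NSS cache module is installed, so here it may simply be ignored or break lookups.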


  • vlans on openvz, centos 6

    - by arheops
    I have CentOS 6 with OpenVZ installed on it, and a switch with VLAN support. I need the following setup:

    - eth0 on the OpenVZ host has to carry multiple tagged VLANs.
    - Each virtual host has to be in a single VLAN.

    Yes, I have already read the OpenVZ wiki on this; it just does not work. On the main server I have interface eth0.108 and I am able to ping an address on that interface (using a notebook on an untagged port in VLAN 108), but I am not able to ping an address inside the container.

    Main node:

        [root@box1 conf]# ifconfig
        eth0      Link encap:Ethernet  HWaddr D0:67:E5:F4:11:60
                  inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:506 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:68939 (67.3 KiB)  TX bytes:1780 (1.7 KiB)
                  Interrupt:16 Memory:c0000000-c0012800

        eth0.108  Link encap:Ethernet  HWaddr D0:67:E5:F4:11:60
                  inet addr:10.11.108.3  Bcast:10.11.111.255  Mask:255.255.252.0
                  inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:238 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:19 errors:0 dropped:12 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:25890 (25.2 KiB)  TX bytes:926 (926.0 b)

        eth1      Link encap:Ethernet  HWaddr D0:67:E5:F4:11:61
                  inet addr:192.168.23.233  Bcast:192.168.23.255  Mask:255.255.255.0
                  inet6 addr: fe80::d267:e5ff:fef4:1161/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1967 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:356 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:365298 (356.7 KiB)  TX bytes:115007 (112.3 KiB)
                  Interrupt:17 Memory:c2000000-c2012800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:7 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:784 (784.0 b)  TX bytes:784 (784.0 b)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet6 addr: fe80::1/128 Scope:Link
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        veth108.0 Link encap:Ethernet  HWaddr 00:18:51:DA:94:D5
                  inet6 addr: fe80::218:51ff:feda:94d5/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:639 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:5 errors:0 dropped:1 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:17996 (17.5 KiB)  TX bytes:308 (308.0 b)

    Virtual node:

        [root@pbx108 /]# ifconfig
        eth0.108  Link encap:Ethernet  HWaddr 00:18:51:CA:B5:C5
                  inet addr:10.11.108.1  Bcast:10.11.111.255  Mask:255.255.252.0
                  inet6 addr: fe80::218:51ff:feca:b5c5/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:685 errors:0 dropped:2 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:308 (308.0 b)  TX bytes:19284 (18.8 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:683 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:683 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:76288 (74.5 KiB)  TX bytes:76288 (74.5 KiB)

    /etc/vz/conf/108.conf:

        # RAM
        PHYSPAGES="0:4000M"
        # Swap
        SWAPPAGES="0:512M"
        # Disk quota parameters (in form of softlimit:hardlimit)
        DISKSPACE="200G:200G"
        DISKINODES="20000000:22000000"
        QUOTATIME="0"
        # CPU fair scheduler parameter
        CPUUNITS="4000"
        VE_ROOT="/vz/root/$VEID"
        VE_PRIVATE="/vz/private/$VEID"
        OSTEMPLATE="centos-6-x86_64"
        ORIGIN_SAMPLE="vswap-256m"
        NETIF="ifname=eth0.108,mac=00:18:51:CA:B5:C5,host_ifname=veth108.0,host_mac=00:18:51:DA:94:D5"
        NAMESERVER="8.8.8.8"
        HOSTNAME="pbx108.localhost"
        IP_ADDRESS=""
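
    With a veth device, nothing forwards frames between the host-side veth108.0 and eth0.108 by itself - the two have to be bridged per VLAN, which is the usual reason the container is unreachable while the host answers fine. A sketch matching the names above (assumes bridge-utils is installed; 255.255.252.0 = /22):

        brctl addbr br108
        brctl addif br108 eth0.108
        brctl addif br108 veth108.0
        ip link set br108 up
        # the VLAN IP then lives on the bridge, not on eth0.108
        ip addr del 10.11.108.3/22 dev eth0.108
        ip addr add 10.11.108.3/22 dev br108

    With one bridge per VLAN (br108, br109, ...), each container's veth joins only the bridge for its VLAN, giving the "each virtual host in a single VLAN" layout. OpenVZ can attach the veth automatically at container start via its bridge parameter or the vznetaddbr script, but the manual commands above are enough to verify the design.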


  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi. I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access, and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to the internet or browse web pages. I would like to be able to access the VPN and also browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect), and if so, how do I do this? Alternatively, if split tunneling is a pain to set up, making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris

        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.30.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 208.74.158.58 255.255.255.252
        !
        ftp mode passive
        access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
        access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
        access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
        access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 192.168.30.0 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.30.0 255.255.255.0 inside
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.30.3
         vpn-tunnel-protocol l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
        username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
        tunnel-group DefaultRAGroup general-attributes
         address-pool LANVPNPOOL
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect ip-options
        !
        service-policy global_policy global
        prompt hostname context
        : end
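
    Two notes, sketched rather than verified against this exact config. First, for L2TP/IPsec the split-tunnel decision is effectively made by the Windows client, not by the group policy: untick "Use default gateway on remote network" in the VPN adapter's TCP/IPv4 Advanced settings and the client keeps its local internet while routing only the remote subnet into the tunnel. Second, the alternative named above (clients reaching the internet via the ASA's WAN IP) is hairpinning, which on 8.2 normally looks like the following - check it against Cisco's "VPN client internet access" example before applying:

        same-security-traffic permit intra-interface
        nat (outside) 1 192.168.30.0 255.255.255.0

    The existing global (outside) 1 interface line then PATs the VPN pool (which falls inside 192.168.30.0/24) out of the outside interface.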

    Read the article

  • obtaining nimbuzz server certificate for nmdecrypt expert in NetMon

    - by lurscher
    I'm using Network Monitor 3.4 with the nmdecrypt expert. I open a nimbuzz conversation node in the conversation window and click Expert -> nmDecrypt -> Run Expert, which brings up a window where I have to add the server certificate. I am not sure how to retrieve the server certificate for the nimbuzz XMPP chat service. Any idea how to do this? This question is a follow-up to this one. Edit, for some background: it might be that this is encrypted with the server public key, and I cannot retrieve the message unless I debug the native binary and try to intercept the encryption code. I have a test client (using agsXMPP) that is able to connect to nimbuzz with no problems. The only thing that is not working is adding invisible mode. It seems this is some packet sent from the official client during login, which I want to obtain. Any suggestions on how to grab this info would be greatly appreciated. Maybe I should get myself (and learn) IDA Pro? This is what I get inspecting the TLS frames in Network Monitor:

        Frame: Number = 81, Captured Frame Length = 769, MediaType = ETHERNET
        + Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[...],SourceAddress:[....]
        + Ipv4: Src = ..., Dest = 192.168.2.101, Next Protocol = TCP, Packet ID = 9939, Total IP Length = 755
        - Tcp: Flags=...AP..., SrcPort=5222, DstPort=3578, PayloadLen=715, Seq=4101074854 - 4101075569, Ack=1127356300, Win=4050 (scale factor 0x0) = 4050
            SrcPort: 5222
            DstPort: 3578
            SequenceNumber: 4101074854 (0xF4716FA6)
            AcknowledgementNumber: 1127356300 (0x4332178C)
          + DataOffset: 80 (0x50)
          + Flags: ...AP...
            Window: 4050 (scale factor 0x0) = 4050
            Checksum: 0x8841, Good
            UrgentPointer: 0 (0x0)
            TCPPayload: SourcePort = 5222, DestinationPort = 3578
        TLSSSLData: Transport Layer Security (TLS) Payload Data
        - TLS: TLS Rec Layer-1 HandShake: Server Hello.; TLS Rec Layer-2 HandShake: Certificate.; TLS Rec Layer-3 HandShake: Server Hello Done.
          - TlsRecordLayer: TLS Rec Layer-1 HandShake:
              ContentType: HandShake:
            - Version: TLS 1.0
                Major: 3 (0x3)
                Minor: 1 (0x1)
              Length: 42 (0x2A)
            - SSLHandshake: SSL HandShake ServerHello(0x02)
                HandShakeType: ServerHello(0x02)
                Length: 38 (0x26)
              - ServerHello: 0x1
                + Version: TLS 1.0
                + RandomBytes:
                  SessionIDLength: 0 (0x0)
                  TLSCipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA { 0x00, 0x35 }
                  CompressionMethod: 0 (0x0)
          - TlsRecordLayer: TLS Rec Layer-2 HandShake:
              ContentType: HandShake:
            - Version: TLS 1.0
                Major: 3 (0x3)
                Minor: 1 (0x1)
              Length: 654 (0x28E)
            - SSLHandshake: SSL HandShake Certificate(0x0B)
                HandShakeType: Certificate(0x0B)
                Length: 650 (0x28A)
              - Cert: 0x1
                  CertLength: 647 (0x287)
                - Certificates:
                    CertificateLength: 644 (0x284)
                  - X509Cert: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL
                    + SequenceHeader:
                    - TbsCertificate: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL
                      + SequenceHeader:
                      + Tag0:
                      + Version: (2)
                      + SerialNumber: -1018418383
                      + Signature: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                      - Issuer: nimbuzz.com,Nimbuzz,NL
                        - RdnSequence: nimbuzz.com,Nimbuzz,NL
                          + SequenceOfHeader: 0x1
                          + Name: NL
                          + Name: Nimbuzz
                          + Name: nimbuzz.com
                      + Validity: From: 02/22/10 20:22:32 UTC To: 02/20/20 20:22:32 UTC
                      + Subject: nimbuzz.com,Nimbuzz,NL
                      - SubjectPublicKeyInfo: RsaEncryption (1.2.840.113549.1.1.1)
                        + SequenceHeader:
                        + Algorithm: RsaEncryption (1.2.840.113549.1.1.1)
                        - SubjectPublicKey:
                          - AsnBitStringHeader:
                            - AsnId: BitString type (Universal 3)
                              - LowTag:
                                  Class: (00......) Universal (0)
                                  Type: (..0.....) Primitive
                                  TagValue: (...00011) 3
                            - AsnLen: Length = 141, LengthOfLength = 1
                                LengthType: LengthOfLength = 1
                                Length: 141 bytes
                            BitString:
                      + Tag3:
                      + Extensions:
                    - SignatureAlgorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                      - SequenceHeader:
                        - AsnId: Sequence and SequenceOf types (Universal 16)
                          + LowTag:
                        - AsnLen: Length = 13, LengthOfLength = 0
                            Length: 13 bytes, LengthOfLength = 0
                      + Algorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                      - Parameters: Null Value
                        - Sha1WithRSAEncryption: Null Value
                          + AsnNullHeader:
                    - Signature:
                      - AsnBitStringHeader:
                        - AsnId: BitString type (Universal 3)
                          - LowTag:
                              Class: (00......) Universal (0)
                              Type: (..0.....) Primitive
                              TagValue: (...00011) 3
                        - AsnLen: Length = 129, LengthOfLength = 1
                            LengthType: LengthOfLength = 1
                            Length: 129 bytes
                        BitString:
          + TlsRecordLayer: TLS Rec Layer-3 HandShake:
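    As an aside, since the server sends its certificate in the clear during the handshake (as the Certificate record above shows), one low-effort way to obtain it is to ask the server directly with openssl's s_client, which can drive XMPP STARTTLS. A sketch - the hostname is an assumption, substitute whatever host the official client actually connects to:

        # dump the PEM certificate chain presented by the XMPP server
        openssl s_client -connect o.nimbuzz.com:5222 -starttls xmpp -showcerts </dev/null

    The PEM block in the output can be saved to a .cer file and handed to the nmDecrypt expert. Keep in mind, though, that decrypting captured traffic under a TLS_RSA_* cipher suite requires the server's private key, not just its public certificate - which is why intercepting the official client's login packet this way may not be possible without control of the server.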

    Read the article

  • Why don't mails show up in the recipient's mailspool?

    - by Jason
    I have Postfix and Dovecot running as a local email system, with Thunderbird as the client. I have two users on my Ubuntu box, mailuser1 and mailuser2, whom I added to Thunderbird. Everything went fine, except that the users don't have anything in their inboxes in Thunderbird, and sent mails don't get through. I'm using Maildir as well. Checking /var/log/mail.log reveals what is happening: restarting Postfix and Dovecot and then sending mail from one user to the other produces the log below. I believe this line is the problem:

        May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1]

    Why is it not connecting? What could be wrong?

    /var/log/mail.log

        May 30 18:30:21 dovecot: imap: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        May 30 18:30:21 dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        May 30 18:30:21 dovecot: imap: Server shutting down. in=467 out=475
        May 30 18:30:21 dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        May 30 18:30:21 dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        May 30 18:30:21 dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        May 30 18:30:21 dovecot: master: Dovecot v2.2.9 starting up (core dumps disabled)
        May 30 18:30:54 dovecot: imap-login: Login: user=<mailuser2>, method=PLAIN, rip=::1, lip=::1, mpid=12638, TLS, session=<xUfQkaD66gAAAAAAAAAAAAAAAAAAAAAB>
        May 30 18:31:04 postfix/master[12245]: terminating on signal 15
        May 30 18:31:04 postfix/master[12795]: daemon started -- version 2.11.0, configuration /etc/postfix
        May 30 18:31:55 postfix/postscreen[12803]: CONNECT from [127.0.0.1]:33668 to [127.0.0.1]:25
        May 30 18:31:55 postfix/postscreen[12803]: WHITELISTED [127.0.0.1]:33668
        May 30 18:31:55 postfix/smtpd[12804]: connect from localhost[127.0.0.1]
        May 30 18:31:55 postfix/smtpd[12804]: 1ED7120EB9: client=localhost[127.0.0.1]
        May 30 18:31:55 postfix/cleanup[12809]: 1ED7120EB9: message-id=<[email protected]>
        May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: from=<[email protected]>, size=546, nrcpt=1 (queue active)
        May 30 18:31:55 postfix/local[12810]: 1ED7120EB9: to=<mailuser2@mysitecom>, relay=local, delay=0.03, delays=0.02/0.01/0/0, dsn=2.0.0, status=sent (delivered to maildir)
        May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: removed
        May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1]
        May 30 18:31:55 dovecot: imap-login: Login: user=<mailuser1>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=12814, TLS, session=<sD9plaD6PgB/AAAB>

    This is my Postfix main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
        myhostname = server
        mydomain = mysite.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = $mydomain
        mydestination = mysite.com
        #relayhost = smtp.192.168.10.1.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        home_mailbox = Maildir/
        mailbox_command =

    All ports are listening:

        tcp   0  0 *:imaps                  *:*      LISTEN
        tcp   0  0 *:submission             *:*      LISTEN
        tcp   0  0 *:imap2                  *:*      LISTEN
        tcp   0  0 s148134.s148134.:domain  *:*      LISTEN
        tcp   0  0 192.168.56.101:domain    *:*      LISTEN
        tcp   0  0 10.0.2.15:domain         *:*      LISTEN
        tcp   0  0 localhost:domain         *:*      LISTEN
        tcp   0  0 *:ssh                    *:*      LISTEN
        tcp   0  0 *:smtp                   *:*      LISTEN
        tcp   0  0 localhost:953            *:*      LISTEN
        tcp6  0  0 [::]:imaps               [::]:*   LISTEN
        tcp6  0  0 [::]:submission          [::]:*   LISTEN
        tcp6  0  0 [::]:imap2               [::]:*   LISTEN
        tcp6  0  0 [::]:domain              [::]:*   LISTEN
        tcp6  0  0 [::]:ssh                 [::]:*   LISTEN
        tcp6  0  0 [::]:smtp                [::]:*   LISTEN
        tcp6  0  0 localhost:953            [::]:*   LISTEN
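    One detail worth cross-checking, since the log above shows Postfix delivering successfully ("status=sent (delivered to maildir)"): Dovecot must read from the same Maildir location that Postfix writes to, otherwise delivered mail never shows up over IMAP. A minimal sketch of the matching pair of settings, assuming the Maildir layout above (check /etc/dovecot/conf.d/10-mail.conf on your system):

        # /etc/postfix/main.cf - deliver to ~/Maildir/ (trailing slash selects maildir format)
        home_mailbox = Maildir/

        # /etc/dovecot/conf.d/10-mail.conf - serve mail from the same place over IMAP
        mail_location = maildir:~/Maildir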

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6.x (the default distribution) with five public IPv4 IPs bound to it. Each IP has DNS and rDNS set separately. Each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example:

        192.168.34.104  red.example.com      /etc/postfix
        192.168.36.48   green.example.net    /etc/postfix-green
        192.168.36.49   pink.example.org     /etc/postfix-pink
        192.168.36.50   orange.example.info  /etc/postfix-orange
        192.168.36.51   blue.example.us      /etc/postfix-blue

    I've tested each IP by telnetting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need. Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, etc. The email that will be sent from blue.example.us is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback loop emails from servers on other networks, saves the bodies of those emails, and then generates a new outbound email header, appends the saved body, and sends the email. It sends by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly. However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the procmail script that handles this email, though, I get this error:

        sendmail: fatal: User account5(###) is not allowed to submit mail

    I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com:

    Postfix - specify interface to deliver outbound mail on
    Postfix user is not allowed to submit mail
    Postfix rejects php mails

    I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main Postfix instance happily accepts outbound email from account5. The postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it.
    =-=-=-=-=-=-=-=-=-=
    At the request of the responder, here are main.cf and master.cf for a) the main Postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us"). [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings.]

    MAIN: master.cf

        smtp inet n - n - - smtpd

    main.cf

        myhostname = red.example.com
        mydomain = example.com
        inet_interfaces = $myhostname, localhost
        inet_protocols = all
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname, localhost.$mydomain, localhost
        local_recipient_maps =
        mynetworks = 192.168.34.104/32
        relay_domains = example.com, example.info, example.net, example.org, example.us
        relayhost = [192.168.34.102]   # Separate physical server, main mailserver.
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        smtpd_banner = $myhostname ESMTP $mail_name
        multi_instance_wrapper = ${command_directory}/postmulti -p --
        multi_instance_enable = yes
        multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue

    FBL: master.cf

        184.173.119.103:25 inet n - n - - smtpd

    main.cf

        myhostname = blue.example.us
        mydomain = blue.example.us   <= Deliberately set to subdomain only.
        myorigin = $mydomain
        inet_interfaces = $myhostname
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname
        local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps
        mynetworks = 192.168.36.51/32, 192.168.35.20/31   <= Second IP is backup MX servers
        relay_domains = $mydestination
        recipient_canonical_maps = hash:/etc/postfix-blue/canonical
        virtual_alias_maps = hash:/etc/postfix-fbl/virtual
        alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
        mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail
        smtpd_banner = $myhostname ESMTP $mail_name
        authorized_submit_users =
        multi_instance_name = postfix-blue
        multi_instance_enable = yes
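    One observation on the FBL instance's main.cf as posted: authorized_submit_users is set to an empty value. In Postfix that parameter defaults to static:anyone, and setting it empty authorizes nobody - which matches the "User account5 is not allowed to submit mail" error exactly. A sketch of the likely fix, assuming account5 is the account that should submit through this instance:

        # /etc/postfix-blue/main.cf - allow the FBL account (and root) to use sendmail(1)
        authorized_submit_users = root, account5

    After a reload of the blue instance, submission through it would then be selected per invocation, e.g.:

        MAIL_CONFIG=/etc/postfix-blue /usr/sbin/sendmail -oi -t < message.txt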

    Read the article

  • Unable to make the session state request to the session state server.

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue - mainly during our busiest time of the day, between 10:30-11:45 AM - where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below.

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
           at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
           at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
           at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all web servers communicate with. I was originally using SQL Server session state for a couple of years as well, prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me - but I had no choice but to try it! I have attempted creating multiple application pools, adding additional servers, changing the TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request - as I understand it, read/write basically causes every page to make round-trips to the state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company and they insist that it is something in my environment, because other clients do not have these sorts of issues. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application... Microsoft's position on the issue is that the session state needs to be reduced, and that the return code being reported back from the state server indicates the buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity!
    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
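    For reference, the client-side piece that ties each farm member to the central state server is the sessionState element in web.config; a minimal sketch of the out-of-proc setup described above (the server name and timeout values are placeholders to adapt):

        <!-- web.config on each farm member: point at the shared ASP.NET State Service -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=stateserver01:42424"
                        stateNetworkTimeout="30"
                        timeout="20" />
        </system.web>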

    Read the article

  • got VPN l2l connect between a site & HQ but no traffic, using ASA5505 on both ends

    - by vinlata
    Hi, could anyone see what I did wrong here? This is the configuration of site1 in a site-to-HQ VPN on an ASA5505. I can get connected, but it seems like no traffic is going (being allowed) between them - could it be a NAT issue? Any help would be much appreciated. Thanks.

        interface Vlan1
         nameif inside
         security-level 100
         ip address 172.30.205.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address pppoe setroute
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
         shutdown
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        passwd .dIuXDIYzD6RSHz7 encrypted
        ftp mode passive
        dns server-group DefaultDNS
         domain-name errg.net
        object-group network HQ
         network-object 172.22.0.0 255.255.0.0
         network-object 172.22.0.0 255.255.128.0
         network-object 172.22.0.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.0
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in extended permit icmp any any echo-reply
        access-list outside_20_cryptomap extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list inside_nat0_outbound extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list policy-nat extended permit ip 172.30.205.0 255.255.255.0 172.22.0.0 255.255.0.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        nat-control
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) 172.30.205.0 access-list policy-nat
        access-group inside_access_in in interface inside
        access-group outside_access_in in interface outside
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        username errgadmin password Os98gTdF8BZ0X2Px encrypted privilege 15
        http server enable
        http 64.42.2.224 255.255.255.240 outside
        http 172.22.0.0 255.255.0.0 outside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 190 match address outside_20_cryptomap
        crypto map outside_map 190 set pfs
        crypto map outside_map 190 set peer 66.7.249.109
        crypto map outside_map 190 set transform-set ESP-3DES-SHA
        crypto map outside_map 190 set phase1-mode aggressive
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 30
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 65535
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp nat-traversal 190
        crypto isakmp ipsec-over-tcp port 10000
        tunnel-group 66.7.249.109 type ipsec-l2l
        tunnel-group 66.7.249.109 ipsec-attributes
         pre-shared-key *
        telnet timeout 5
        ssh 172.30.205.0 255.255.255.0 inside
        ssh 172.22.0.0 255.255.0.0 outside
        ssh 64.42.2.224 255.255.255.240 outside
        ssh 172.25.0.0 255.255.128.0 outside
        ssh timeout 5
        console timeout 0
        management-access inside
        vpdn group PPPoEx request dialout pppoe
        vpdn group PPPoEx localname [email protected]
        vpdn group PPPoEx ppp authentication pap
        vpdn username [email protected] password *********
        dhcpd address 172.30.205.100-172.30.205.131 inside
        dhcpd dns 172.22.0.133 68.94.156.1 interface inside
        dhcpd wins 172.22.0.133 interface inside
        dhcpd domain errg.net interface inside
        dhcpd enable inside
        !
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        end
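    A first diagnostic step that usually separates a crypto problem from a NAT problem here is to generate traffic toward HQ and inspect the SA counters on both ASAs; a sketch, using the peer address from the config above:

        show crypto isakmp sa
        show crypto ipsec sa peer 66.7.249.109

    If the #pkts encaps counter stays at zero on the sending side, interesting traffic is never reaching the crypto map (re-check the crypto ACL and the NAT-exemption/policy-NAT ACLs, which overlap in this config); if encaps climbs but the far side's decaps stays at zero, packets are being lost in transit or dropped by the peer's policy.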

    Read the article

  • Make exact mp4 (H264) format for uploading to youtube

    - by WHITECOLOR
    With ffmpeg I'm converting a video from an mp3 and a picture to upload it to YouTube. After upload, YouTube's conversion fails; the reasons are unknown. I believe the problem is the format. Incidentally, if I upload a file 5 minutes in length it fails, while uploading just 30 seconds of the same file succeeds. I downloaded an mp4 file from YouTube and then uploaded it again; it was processed very quickly. So a nice solution would be to convert videos to the same format that Google produces internally. I got the following output from ffmpeg:

        ffmpeg version N-44264-g070b0e1 Copyright (c) 2000-2012 the FFmpeg developers
          built on Sep 7 2012 17:38:57 with gcc 4.7.1 (GCC)
          configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
          libavutil      51. 72.100 / 51. 72.100
          libavcodec     54. 55.100 / 54. 55.100
          libavformat    54. 25.105 / 54. 25.105
          libavdevice    54.  2.100 / 54.  2.100
          libavfilter     3. 16.100 /  3. 16.100
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 15.100 /  0. 15.100
          libpostproc    52.  0.100 / 52.  0.100
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'youtubetrack0.mp4':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2012-10-02 22:58:57
          Duration: 00:06:46.66, start: 0.000000, bitrate: 176 kb/s
            Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 450x360, 78 kb/s, 6 fps, 6 tbr, 12 tbn, 12 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
              handler_name    : VideoHandler
            Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 95 kb/s
            Metadata:
              creation_time   : 2012-10-02 22:58:57
              handler_name    : IsoMedia File Produced by Google, 5-11-2011

    Is it possible to construct ffmpeg parameters that would produce the same format Google creates internally? Is the information above sufficient? I couldn't construct the needed params; for example, I don't understand how to set tbn, or what 95 kb/s means in "Stream #0:1(und): Audio:". Now I just do:

        ffmpeg -i videoimage.jpg -i audio.mp3 video.mp4

    Info I've got:

        ffmpeg version N-44998-gdf82454 Copyright (c) 2000-2012 the FFmpeg developers
          built on Oct 2 2012 23:03:12 with gcc 4.7.1 (GCC)
          configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
          libavutil      51. 73.101 / 51. 73.101
          libavcodec     54. 63.100 / 54. 63.100
          libavformat    54. 29.105 / 54. 29.105
          libavdevice    54.  3.100 / 54.  3.100
          libavfilter     3. 19.102 /  3. 19.102
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 16.100 /  0. 16.100
          libpostproc    52.  1.100 / 52.  1.100
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2avc1mp41
            encoder         : Lavf54.25.105
          Duration: 00:06:46.81, start: 0.000000, bitrate: 129 kb/s
            Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 450x360, 3392 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
            Metadata:
              handler_name    : VideoHandler
            Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              handler_name    : SoundHandler

    This video fails conversion on YouTube. I also tried other vcodec params and output extensions (mp4, wmv, avi), but those failed too. I would be grateful for help.
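    As a starting point, a command along these lines gets much closer to the YouTube-produced file probed above (H.264 Constrained Baseline in yuv420p plus AAC audio, with the image looped for the duration of the audio). The flag values are taken from the probe output where possible; the rest are assumptions to tweak:

        ffmpeg -loop 1 -i videoimage.jpg -i audio.mp3 \
          -c:v libx264 -profile:v baseline -pix_fmt yuv420p -r 6 \
          -c:a aac -b:a 96k -ar 44100 \
          -shortest -movflags +faststart video.mp4

    Note that yuvj420p (full-range, as in the failing file) versus yuv420p is exactly the kind of difference that can trip up a downstream transcoder, so forcing -pix_fmt yuv420p is worth trying first. On 2012-era builds the native aac encoder may require -strict experimental; alternatively use -c:a libvo_aacenc, which the build above includes.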

    Read the article

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it, provided by our ISP. They've told us they've started getting access-denied responses when doing zone transfers (AXFR). If I add my own IPs to the allow-transfer list, I also get a transfer failure when using dig with the AXFR argument. Here is what my BIND configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1;  // localhost
                1.1.1.1;    // public dns slave 1
                2.2.2.2;    // public dns slave 2
                3.3.3.3;    // public dns slave 3
            };
            also-notify {
                1.1.1.1;    // public dns slave 1
                2.2.2.2;    // public dns slave 2
                3.3.3.3;    // public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes;
                severity info;
                print-category yes;
                print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in {
            type hint;
            file "root.hint";
        };

        zone "localhost" in {
            type master;
            file "localhost.zone";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };

        include "/etc/named.conf.include";

        zone "example.net " {
            type master;
            file "/var/lib/named/master/example.net.hosts";
        };

        zone "example.com " {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };

        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone declarations and still get failed transfers. Any idea what I'm doing wrong?
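    One thing that stands out in the config as posted: both zone names are declared with a trailing space - zone "example.com " rather than zone "example.com". BIND would then be authoritative for a zone whose name never matches the incoming AXFR request for 'example.com.', which is consistent with the NOTAUTH error shown. If that whitespace really is in named.conf and not just a copy/paste artifact, the fix would simply be:

        zone "example.com" {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };

    followed by an rndc reconfig and a retest with: dig @localhost example.com AXFR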

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request - URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

    FiddlerCore Download
    FiddlerCore on NuGet
    Show me the Code (WebSurge Integration code from GitHub)
    Download the WinForms Sample Form
    West Wind Web Surge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage - see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore
    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

        PM> Install-Package FiddlerCore

    The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

    Capturing HTTP Content
    Once the library is installed, it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
    Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

        void Start()
        {
            if (tbIgnoreResources.Checked)
                CaptureConfiguration.IgnoreResources = true;
            else
                CaptureConfiguration.IgnoreResources = false;

            string strProcId = txtProcessId.Text;
            if (strProcId.Contains('-'))
                strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
            strProcId = strProcId.Trim();

            int procId = 0;
            if (!string.IsNullOrEmpty(strProcId))
            {
                if (!int.TryParse(strProcId, out procId))
                    procId = 0;
            }
            CaptureConfiguration.ProcessId = procId;
            CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

            FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
            FiddlerApplication.Startup(8888, true, true, true);
        }

    The key lines for FiddlerCore are just the last two lines of code, which include the event hookup and the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also tie into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence and it's the most common choice for capturing all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you're capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

        private void FiddlerApplication_AfterSessionComplete(Session sess)
        {
            // Ignore HTTPS connect requests
            if (sess.RequestMethod == "CONNECT")
                return;

            if (CaptureConfiguration.ProcessId > 0)
            {
                if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                    return;
            }

            if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
            {
                if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                    return;
            }

            if (CaptureConfiguration.IgnoreResources)
            {
                string url = sess.fullUrl.ToLower();

                var extensions = CaptureConfiguration.ExtensionFilterExclusions;
                foreach (var ext in extensions)
                {
                    if (url.Contains(ext))
                        return;
                }

                var filters = CaptureConfiguration.UrlFilterExclusions;
                foreach (var urlFilter in filters)
                {
                    if (url.Contains(urlFilter))
                        return;
                }
            }

            if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
                return;

            string headers = sess.oRequest.headers.ToString();
            var reqBody = sess.GetRequestBodyAsString();

            // if you wanted to capture the response
            //string respHeaders = session.oResponse.headers.ToString();
            //var respBody = session.GetResponseBodyAsString();

            // replace the HTTP line to inject full URL
            string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
            int at = headers.IndexOf("\r\n");
            if (at < 0)
                return;
            headers = firstLine + "\r\n" + headers.Substring(at + 1);

            string output = headers + "\r\n" +
                            (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                            Separator + "\r\n\r\n";

            BeginInvoke(new Action<string>((text) =>
            {
                txtCapture.AppendText(text);
                UpdateButtonStatus();
            }), output);
        }

    The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful for narrowing down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources - images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports - basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or save it directly to a file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

    Stopping the FiddlerCore Proxy
    Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

        void Stop()
        {
            FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }

    As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here - I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
    Adding Fiddler Certificates with FiddlerCore
    One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately, FiddlerCore actually includes the tools to register the Fiddler certificate directly.

    Why does Fiddler need a Certificate in the first Place?
    Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler registers itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler - or FiddlerCore/WebSurge - is running. For plain HTTP requests this just works - Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple - Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it also has to act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence's blog post on certificates. If you're using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store; Fiddler proper does this from the Options menu. This operation does several things:

    It installs the Fiddler Root Certificate
    It sets trust to this Root Certificate
    A new client certificate is generated for each HTTPS site monitored

    Certificate Installation with FiddlerCore
    You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
    Using CertMaker is straightforward, and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;

                if (!CertMaker.trustRootCert())
                    return false;
            }

            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }

            return true;
        }

    InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn't, goes ahead and creates a new one. Creating the certificate is a two-step process - first the actual certificate is created, and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details). Once the root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the Remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

    When to check for an installed Certificate
    Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code - in which case the certificate installation only triggers when a certificate is in fact not installed. Personally, I like to make certificate installation explicit - just like Fiddler does - so in WebSurge I use a small dropdown option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate and UninstallCertificate functions respectively - the experience is similar to what you get in Fiddler, with the extra dialog box prompting for confirmation of the root certificate installation.
    Once the cert is installed you can then capture SSL requests. There's a gotcha, however…

    Gotcha: FiddlerCore Certificates don't stick by Default
    When I originally tried to use the Fiddler certificate installation, I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

    CertMaker and BcMakeCert create non-sticky Certificates
    It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate looks something like this:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;

                if (!CertMaker.trustRootCert())
                    return false;

                App.Configuration.UrlCapture.Cert =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
                App.Configuration.UrlCapture.Key =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
            }
            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }
            App.Configuration.UrlCapture.Cert = null;
            App.Configuration.UrlCapture.Key = null;
            return true;
        }

    In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
    I do this in the capture form's constructor:

        public FiddlerCapture(StressTestForm form)
        {
            InitializeComponent();
            CaptureConfiguration = App.Configuration.UrlCapture;
            MainForm = form;

            if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
            {
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
                    App.Configuration.UrlCapture.Key);
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
                    App.Configuration.UrlCapture.Cert);
            }
        }

    This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

    MakeCert provides sticky Certificates and the same functionality as Fiddler
    But there's actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices, it's simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. The CertMaker.dll reference in the project is then removed, as are the CertMaker.dll and BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

    Summary
    FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it's a big job to get HTTP right - especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
    The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP - it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

    Resources
    FiddlerCore Download
    FiddlerCore NuGet
    Fiddler Capture Sample Form
    Fiddler Capture Form in West Wind WebSurge (GitHub)
    Eric Lawrence's Fiddler Book

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I've shown you our SuperForm test application solution structure and what the main wxs and wxi include files look like. In this post I'll show you how to automate the inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have tens or hundreds of different files, from dlls to resource files like pictures. It all depends on what kind of application you're building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command-line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means. Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are two options you can use:

    1. Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor.
    2. Run the Heat command-line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment, you don't have to worry about missing any new files you add. The con is that you'll include files that you otherwise might not want to.

    There is no perfect solution, so pick one and deal with it. I prefer the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) copy the files to be installed to a special location and have Heat.exe read them from there. I haven't set this up for this tutorial, and I'm simply including all files from the default SuperForm.MainApp\bin directory. Remember how we created a system environment variable called SuperFormFilesDir? This is where we'll use it for the first time. The command-line text that you have to put into the pre-build event of your WiX project looks like this:

        "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs"

    After you install WiX you'll get the WIX environment variable. In pre/post-build events, environment variables are referenced like this: $(WIX). By using this, you don't have to think about the installation path of WiX. Remember: for 32-bit applications, the Program Files folder is named differently on 32- and 64-bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built-in variable. You can view all Heat.exe options by running it without parameters, but I'll explain some that stick out the most:

    dir "$(SuperFormFilesDir)": tells Heat to harvest the whole directory at the given location - the location we set in our system environment variable.
    -cg SuperFormFiles: the name of the component group that will be created. This name is referenced in our Feature tag, as seen in the previous post.
    -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top-level directory structure in the previous post.
    -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file.
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail.

    Summary:
    - SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system
    - SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs
    - SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership
    - The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders
    - You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController
    - The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core
    - ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership
    - The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership

    The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools.

    SimpleMembership: The future of membership for ASP.NET
    The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work).

    The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really been focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built-in security in ASP.NET. So with this release we took that time to create a new built-in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created?
    Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership.

    Problems with ASP.NET Membership
    ASP.NET Membership was very obviously designed around a set of assumptions:
    - Users and user information would most likely be stored in a full SQL Server database or in Active Directory
    - User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider
    Some problems fall out of these assumptions.

    Requires full SQL Server for default cases
    The default and most fully featured ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, rely on SQL Server cache dependencies, and depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those...

    Custom Membership Providers have to work with a SQL-Server-centric API
    If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply.

    Designed around a specific view of users, roles and profiles
    The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns:
    - In OAuth and OpenID, the user doesn't have a password
    - Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles
    - For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob
    What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted and the membership system worked with your model - not the other way around.

    Requires specific schema, overflow in blob columns
    I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema.
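    To make the "blob column behind an API" point concrete, here's a minimal sketch of reading extra profile data the classic way (not from the original post; the Birthday property is illustrative and would have to be declared under <profile><properties> in web.config):

    // Classic ASP.NET Profile: extra user data is serialized into a blob
    // column in the aspnet_Profile table and is only reachable via the API.
    var profile = System.Web.Profile.ProfileBase.Create("jon");
    var birthday = (DateTime)profile.GetPropertyValue("Birthday"); // deserialized from the blob
    profile.SetPropertyValue("Birthday", birthday.AddYears(1));    // changes are written back...
    profile.Save();                                                // ...only through Save()

    There's no sensible way to query or index that data in SQL - which is exactly what SimpleMembership fixes, as we'll see next.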
    SimpleMembership as a better membership system
    As you might have guessed, SimpleMembership was designed to address the above problems.

    Works with your schema
    As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias; it could be an email column - you just have to tell SimpleMembership to treat that as the “username” used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner:

    WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true);

    No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username.

    Broaden database support to the whole SQL Server family
    While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point.
    Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts?
    Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system.

    Easy to use with Entity Framework Code First
    The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class.

    [Table("UserProfile")]
    public class UserProfile
    {
        [Key]
        [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
        public int UserId { get; set; }
        public string UserName { get; set; }
        public DateTime Birthday { get; set; }
    }

    Now if I want to access that information, I can just grab the account by username and read the value.
    var context = new UsersContext();
    var username = User.Identity.Name;
    var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username);
    var birthday = user.Birthday;

    So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control.

    How SimpleMembership integrates with ASP.NET Membership
    Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates.

    Membership in the ASP.NET MVC 4 project templates
    ASP.NET MVC 4 offers six project templates:
    - Empty - really empty, just the assemblies, folder structure and a tiny bit of basic configuration.
    - Basic - like Empty, but with a bit of UI preconfigured (css / images / bundling).
    - Internet - this has both a Home and Account controller and associated views. The Account controller supports registration and login via either local accounts or OAuth / OpenID providers.
    - Intranet - like the Internet template, but preconfigured for Windows Authentication.
    - Mobile - preconfigured using jQuery Mobile and intended for mobile-only sites.
    - Web API - preconfigured for a service backend built on ASP.NET Web API.
    Out of these templates, only one (the Internet template) uses SimpleMembership.

    ASP.NET MVC 4 Basic template
    The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers.
    You can see that configuration in the ASP.NET MVC 4 Basic template's web.config:

    <profile defaultProvider="DefaultProfileProvider">
      <providers>
        <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </profile>
    <membership defaultProvider="DefaultMembershipProvider">
      <providers>
        <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
      </providers>
    </membership>
    <roleManager defaultProvider="DefaultRoleProvider">
      <providers>
        <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </roleManager>
    <sessionState mode="InProc" customProvider="DefaultSessionProvider">
      <providers>
        <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
      </providers>
    </sessionState>

    This means that it's business as usual for the Basic template as far as ASP.NET Membership works.

    ASP.NET MVC 4 Internet template
    The Internet template has a few things set up to bootstrap SimpleMembership:
    - \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
    - \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
    - \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity

    WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example:
    1. Create a new ASP.NET MVC 4 application using the Internet application template
    2. Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
    3. Run the application, click on Register, add a username and password, and click submit
    4. You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider".
    This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else.
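    Conceptually, the guard looks something like this (a sketch of the behavior, not the actual WebMatrix.WebData source):

    // WebSecurity refuses to work against a plain MembershipProvider:
    var extended = System.Web.Security.Membership.Provider
        as WebMatrix.WebData.ExtendedMembershipProvider;
    if (extended == null)
        throw new InvalidOperationException(
            "To call this method, the \"Membership.Provider\" property must be an instance of \"ExtendedMembershipProvider\".");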
    So, what do you do? Options:
    - If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.
    - If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:
      - Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth packages), etc.
      - Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes.
      - Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.
    None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.

    Membership in the ASP.NET 4.5 project template
    ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a membership adapter to save OAuth data into an EF-managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.

    How does this fit in with Universal Providers (System.Web.Providers)?
    Just to summarize:
    - Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with a database backend in the SQL Server family other than full SQL Server. They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls.
    - Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
    - Universal Providers do not work with SimpleMembership.
    - The Universal Providers packages include some web.config transforms which you would normally want when you're using them.

    What about the Web Site Administration Tool?
    Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there:
    - Use the WebSecurity and OAuthWebSecurity APIs to manage the users and roles (see the sketch at the end of this post)
    - Create a web admin using the above APIs
    Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course).
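    For instance, a minimal sketch of code-based user and role management (the user name, role, and Birthday column are illustrative; WebSecurity lives in WebMatrix.WebData and Roles in System.Web.Security):

    // Creates a row in your own user table plus the membership data in one call;
    // the anonymous object supplies values for extra columns you defined (Birthday here).
    WebSecurity.CreateUserAndAccount("alice", "Pa$$w0rd",
        new { Birthday = new DateTime(1985, 1, 1) });
    // The classic Roles API still works, backed by SimpleRoles:
    if (!Roles.RoleExists("Administrator"))
        Roles.CreateRole("Administrator");
    Roles.AddUserToRole("alice", "Administrator");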

    Read the article

  • Content Management for WebCenter Installation Guide

    - by Gary Niu
    Overview
    As we know, there are two ways to install Content Management for WebCenter. One is to install it with the WebCenter installer wizard; the other is to install it using its own installer. This guide covers the latter. For SSO purposes, I also cover how to configure an OID identity store for Content Management for WebCenter.

    Content Management for WebCenter (10.1.3.5.1)
    Oracle Enterprise Linux R5U4

    Basic Installation
    -bash-3.2$ ./setup.sh
    Please select your locale from the list.
      1. Chinese-Simplified
      2. Chinese-Traditional
      3. Deutsch
     *4. English-US
      5. English-UK
      6. Español
      7. Français
      8. Italiano
      9. Japanese
     10. Korean
     11. Nederlands
     12. Português-Brazil
    Choice?
    Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter.
    Select installation type from the list.
     *1. Install new server
      2. Update a server
    Choice?
    Content Server Installation Directory
    Please enter the full pathname to the installation directory.
    Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server
    Create Directory
     *1. yes
      2. no
    Choice?
    Java virtual machine
     *1. Sun Java 1.5.0_11 JDK
      2. Specify a custom Java virtual machine
    Choice?
    Installing with Java version 1.5.0_11.
    Enter the location of the native file repository. This directory contains the native files checked in by contributors.
    Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]:
    Create Directory
     *1. yes
      2. no
    Choice?
    Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server.
    Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]:
    Create Directory
     *1. yes
      2. no
    Choice?
    This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy. Configure this server as a master or proxied server.
     *1. Configure as a master server.
      2. Configure as server proxied by a local master server.
    Choice?
    During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead. Select admin server configuration.
     *1. Install an admin server to manage this server.
      2. Configure an existing admin server to manage this server.
      3. Don't configure an admin server.
    Choice?
    Enter the location of an executable to start your web browser. This browser will be used to display the online help.
    Web Browser Path [/usr/bin/firefox]:
    Content Server System locale
      1. Chinese-Simplified
      2. Chinese-Traditional
      3. Deutsch
     *4. English-US
      5. English-UK
      6. Español
      7. Français
      8. Italiano
      9. Japanese
     10. Korean
     11. Nederlands
     12. Português-Brazil
    Choice?
    Please select the region for your timezone from the list.
     *1. Use the timezone setting for your operating system
      2. Pacific
      3. America
      4. Atlantic
      5. Europe
      6. Africa
      7. Asia
      8. Indian
      9. Australia
    Choice?
    Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused.
    Content Server Port [4444]:
    Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused.
    Admin Server Port [4440]:
    Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details.
    Incoming connection address filter [127.0.0.1]:*.*.*.*
    *** Content Server URL Prefix
    The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout.
    Web Server Relative Root [/idc/]:
    Enter the name of the local mail server. The server will contact this system to deliver email.
    Company Mail Server [mail]:
    Enter the e-mail address for the system administrator.
    Administrator E-Mail Address [sysadmin@mail]:
    *** Web Server Address
    Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90
    Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777
    Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores.
    Server Instance Name [idc]:
    Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long.
    Server Instance Label [idc]:
    Enter a long description for this instance.
    Server Description [Content Server idc]:
    Web Server
     *1. Apache
      2. Sun ONE
      3. Configure manually
    Choice?
    Please select a database from the list below to use with the Content Server.
    Content Server Database
     *1. Oracle
      2. Microsoft SQL Server 2005
      3. Microsoft SQL Server 2000
      4. Sybase
      5. DB2
      6. Custom JDBC settings
      7. Skip database configuration
    Choice?
    Manually configure JDBC settings for this database
      1. yes
     *2. no
    Choice?
    Oracle Server Hostname [localhost]:
    Oracle Listener Port Number [1521]:
    *** Database User ID
    The user name is used to log into the database used by the content server.
    Oracle User [user]:YEKKI_OCSERVER
    *** Database Password
    The password is used to log into the database used by the content server.
    Oracle Password []:oracle
    Oracle Instance Name [ORACLE]:orcl
    Configure the JVM to find the JDBC driver in a specific jar file
      1. yes
     *2. no
    Choice?
    The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now.
    Attempt to create database tables
      1. yes
     *2. no
    Choice?
    Select components to install.
      1. ContentFolios: Collect related items in folios
      2. Folders_g: Organize content into hierarchical folders
      3. LinkManager8: Hypertext link management support
      4. OracleTextSearch: External Oracle 11g database as search indexer support
      5. ThreadedDiscussions: Threaded discussion management
    Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5
     *1. ContentFolios: Collect related items in folios
     *2. Folders_g: Organize content into hierarchical folders
     *3. LinkManager8: Hypertext link management support
     *4. OracleTextSearch: External Oracle 11g database as search indexer support
     *5. ThreadedDiscussions: Threaded discussion management
    Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F
    Checking configuration. . .
    Configuration OK. Review install settings. . .
    Content Server Core Folder: /opt/oracle/ucm/server
    Java virtual machine: Sun Java 1.5.0_11 JDK
    Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/
    Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/
    Proxy authentication through another server: no
    Install admin server: yes
    Web Browser Path: /usr/bin/firefox
    Content Server System locale: English-US
    Content Server Port: 4444
    Admin Server Port: 4440
    Incoming connection address filter: *.*.*.*
    Web Server Relative Root: /idc/
    Company Mail Server: mail
    Administrator E-Mail Address: sysadmin@mail
    Web Server HTTP Address: yekki.cn.oracle.com:7777
    Server Instance Name: idc
    Server Instance Label: idc
    Server Description: Content Server idc
    Web Server: Apache
    Content Server Database: Oracle
    Manually configure JDBC settings for this database: false
    Oracle Server Hostname: localhost
    Oracle Listener Port Number: 1521
    Oracle User: YEKKI_OCSERVER
    Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I=
    Oracle Instance Name: orcl
    Configure the JVM to find the JDBC driver in a specific jar file: false
    Attempt to create database tables: no
    Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions
    Proceed with install
     *1. Proceed
      2. Change configuration
      3. Recheck the configuration
      4. Abort installation
    Choice?
    Finished install type Install with warnings at 4/2/10 12:32 AM.

    Run Scripts
    -bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf
    Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip'
    Service 'DELETE_DOC' Extended
    Service 'DELETE_BYREV_REVISION' Extended
    Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip'
    (internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column
    Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip'
    Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip'
    Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip'
    Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip'
    Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
    WARNING: A password credential is expected; instead found .
    Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
    WARNING: The credential with map JPS and key ldap.credential does not exist.
    Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
    WARNING: A password credential is expected; instead found .
    Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
    WARNING: The credential with map JPS and key ldap.credential does not exist.
    Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
    WARNING: A password credential is expected; instead found .
    Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
    WARNING: The credential with map JPS and key ldap.credential does not exist.
    Restart Content Server to apply updates.

    Configuring Apache Web Server
    Append the following line to httpd.conf:
    include "/opt/oracle/ucm/server/data/users/apache22/apache.conf"

    Configuring the Identity Store (Optional)
    1. Stop Oracle Content Server and the Admin Server.
    2. Update the Oracle Content Server's JPS configuration file, jps-config.xml:
    a. Add a service instance:
    <serviceInstance provider="idstore.ldap.provider" name="idstore.oid">
      <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property>
      <property name="idstore.type" value="OID"></property>
      <property name="security.principal.key" value="ldap.credential"></property>
      <property name="security.principal.alias" value="JPS"></property>
      <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property>
      <extendedProperty>
        <name>user.search.bases</name>
        <values>
          <value>cn=users,dc=cn,dc=oracle,dc=com</value>
        </values>
      </extendedProperty>
      <extendedProperty>
        <name>group.search.bases</name>
        <values>
          <value>cn=groups,dc=cn,dc=oracle,dc=com</value>
        </values>
      </extendedProperty>
      <property name="username.attr" value="uid"></property>
      <property name="user.login.attr" value="uid"></property>
      <property name="groupname.attr" value="cn"></property>
    </serviceInstance>
    b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap:
    <jpsContext name="default">
      <serviceInstanceRef ref="idstore.oid"/>
    3. Run the new script to set up the credentials for idstore.oid in the credential store:
    cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools
    -bash-3.2$ ./run_credtool.sh
    Buildfile: ./../tools/credtool.xml
        [input] skipping input as property action has already been set.
        [input] Alias: [JPS]
        [input] Key: [ldap.credential]
        [input] User Name: cn=orcladmin
        [input] Password: welcome1
        [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml]
    manage-creds:
         [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage
         [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.
         [java] Credential store location : /opt/oracle/ucm/server/config
         [java] Credential with map JPS key ldap.credential stored successfully!
         [java]
         [java]
         [java]     Credential for map JPS and key ldap.credential is:
         [java]             PasswordCredential name : cn=orcladmin
         [java]             PasswordCredential password : welcome1
    BUILD SUCCESSFUL
    Total time: 1 minute 27 seconds
    Testing
    1. Access http://yekki.cn.oracle.com:7777/idc
    2. Log in with an OID user, for example: orcladmin/welcome1
    3. Make sure your JpsUserProvider status is "good"

    Read the article
