Search Results

Search found 12423 results on 497 pages for 'get proactive customer adoption team'.


  • C# sends SQL data four times slower from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in two different data centres, and a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server I can see in Perfmon that Send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at 1 MB/s and the other at 250 KB/s. My immediate suspicion is that a router in one of the DCs shapes traffic or applies QoS limits to traffic from London to the US. However, support, the Windows team and the networking team all say there are no differences in either the networking config of the two DCs or the NIC config of the two app servers... How do I find out why the networking bottleneck is four times tighter in one place than in the other? What can I do about it?
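
    A quick way to take the application out of the equation is to measure raw TCP throughput over the same path with a tool such as iperf; a minimal sketch, not a prescription (the host name is hypothetical, flags are iperf 2.x):

        # on the SQL box in the US (receiver)
        iperf -s

        # from each London app server: 4 parallel streams for 30 seconds
        iperf -c sql01.example.com -P 4 -t 30

    If UAT and production show the same 4x gap under iperf, the throttling is in the network path despite what the teams report; if they match, look at the hosts instead (NIC offload settings, TCP window size, duplex mismatches).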

    Read the article

  • Problem running “Central Administration” website after Windows update on Windows Server 2003 Standard

    - by Magdy Roshdy
    I had WSS 2.0 and then upgraded to WSS 3.0; the old installation database was SQL 2000, and I now have another SQL Server instance called server_name\MICROSOFT##SSEE. After the upgrade everything worked fine, our team started to use the portal, and we added lots of documents and activity to it. The problem started after installing Windows updates: the website suddenly stopped, giving me the error "Cannot connect to the configuration database". If I try to open the SharePoint Products and Technologies Configuration Wizard, it gives me a strange error: "An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException was thrown. Additional exception information: SharePoint Products and Technologies cannot be configured. The current installation mode does not support SKU to SKU upgrades because there exists an older version of Windows SharePoint Services that must be upgraded first". At this post: http://stackoverflow.com/questions/114398/iis-error-cannot-connect-to-the-configuration-database/249494#249494 the author of the second answer has the same problem and suggests a solution, but I don't understand it well. As he suggested, I tried setting the identity of the app pool of the SharePoint web site to "IWAM_server_name"; after that the error changed, as he said it would, and the web site now gives me "Server Application Unavailable". When I checked the Event Viewer on the server I found that ASP.NET 2.0 throws this exception: "Could not load file or assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Access is denied." I don't know how to solve this problem. I really want to get the web site working, because our team needs these documents and related material. I hope someone can help me.

    Read the article

  • Rescue a system running TFS that BSODs, into VMware ESXi

    - by 3molo
    Hi, after moving to new facilities, one of our old Dell servers running Windows Server 2003 R2 on PowerEdge 2650 hardware BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have a fresh, recent backup of the MSSQL database(s). We tried removing/reseating the memory modules, with no success. The system boots into safe mode but hangs occasionally. I booted a Linux live CD and did a dd of both c: and d:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38 GB (actually it became 40 GB) partition to act as C:, and booted a live CD. I then uncompressed the disk image of c: and dd'd it to the new c: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (but most likely BSOD due to not having the correct drivers), but the VMware ESXi guest does not recognize it as a bootable disk. We don't have the VMware enterprise license, so VMware Converter cold cloning is not an option. Did I do something wrong in my dd's etc. with the images, or why won't it even try to boot? Am I wasting my time? What other approach is there? I will continue to try to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest? 1. Continue to get the dd'd images onto the virtual disk and get it to boot. 2. Install a new Windows server, install Team Foundation Server and restore from backup. 3. Focus on the old problematic hardware. Any help appreciated.
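
    One likely culprit: dd'ing a partition image to /dev/sda1 leaves the virtual disk without an MBR, partition table or boot sector, so ESXi has nothing to boot from. A minimal sketch of the whole-disk approach, with device names and paths assumed:

        # on the physical box, image the WHOLE disk so the MBR and
        # partition table come along, not just the filesystem
        dd if=/dev/sda bs=1M conv=noerror,sync | gzip -c > /mnt/backup/disk0.img.gz

        # inside the ESXi guest, restore to the whole virtual disk
        gunzip -dc disk0.img.gz | dd of=/dev/sda bs=1M

    If only the partition images exist, an alternative is to partition the virtual disk, mark the partition active, restore the image into it, and then rebuild the boot code with fixmbr/fixboot from the Windows Server 2003 Recovery Console on the install CD.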

    Read the article

  • Best SSD tweaks for Windows 7

    - by Nick Berardi
    I have seen many articles about tweaking an SSD, but many of them seem outdated or too broad (read: general tweaks for all of Windows XP, Vista, and Windows 7). And I know that Windows 7 has been specifically tuned for SSDs by the Windows team, so I don't want to do something that was written with Windows XP in mind and end up circumventing something the Windows team has specifically designed into Windows 7. So my question is: what are the best SSD tweaks for Windows 7 that you have found to get the most performance out of your drive? I hope to build a comprehensive list in the answers below so there won't be so much disinformation in the forums about what to do and what not to do. Here are a few that I see posted on the forums a lot, and some questions to get the discussion started: disabling Superfetch — yes or no? Disabling the page file, or limiting it to a really small size such as 500 MB — yes or no? Disabling indexing — yes or no? Disabling defragmenting — yes or no? What are your thoughts? Do you have any tweaks that have worked for you? When providing an answer please do your best to back it up with a reason and possibly some documentation from MSDN, TechNet, or another credible source.

    Read the article

  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configuration. Our company has two email domains, xyz.com and xyz.biz. Each employee has an email ID on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, they need to send the email twice: once through the xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an email from xyz.com to a group of xyz.biz addresses does not seem to work. I want to know if Outlook provides a feature such that we can configure rules to send an email through an ID on xyz.com to all users on xyz.com and have the same email sent automatically to users on xyz.biz through an ID on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims this is a limitation causing the problem. Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ, whereas the employees in the other division use domain ABC to log into their Windows machines or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers which share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However, when trying to run the software, it tries to create lock files and a bunch of other things that their support team can't quite explain; the ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file lock issues, etc.). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software: basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single points of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
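
    For reference, the wedge layer described above amounts to a loopback NFS export of the GFS2 mountpoint on every node. A minimal sketch, with all paths assumed; note that Red Hat's GFS2 documentation calls for the localflocks mount option whenever a GFS2 filesystem is exported over NFS, so verify the combination against your distro's docs:

        # 1) mount GFS2 with localflocks so POSIX locks are handled per node
        mount -t gfs2 -o localflocks /dev/vg_cluster/lv_splunk /mnt/gfs2

        # 2) add a loopback export on every node, e.g. in /etc/exports:
        #    /mnt/gfs2  localhost(rw,sync,no_root_squash,fsid=1)
        exportfs -ra

        # 3) point the software at the NFS view of the same data
        mount -t nfs -o vers=3 localhost:/mnt/gfs2 /opt/splunk/pool

    One caveat worth testing: with localflocks, locks taken through one node's loopback mount are not visible to the other nodes, which may or may not matter for search head pooling.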

    Read the article

  • 64-bit Windows 7 gets stuck on logo screen on bootup

    - by Richard B
    I have a PC running Windows 7 in my office which I'm not using at the moment (because I'm working elsewhere as a consultant), so I've only been accessing it using TeamViewer (http://www.teamviewer.com/), which means the PC has been running for quite some time now. I've restarted it maybe twice a week, though. A few days ago I couldn't access it using TeamViewer, and when I got to the office the screen was black with only the mouse pointer showing. The PC has four hard disks; three of them (all 1 TB) are in a RAID 5 array. This is what I've done so far: I reboot and everything seems to load correctly. I get to a screen that gives me two choices: boot Windows normally or perform a startup repair. Choosing to boot Windows only gets me to the Windows 7 logo screen, which animates over and over again. Choosing to repair gets me to the repair screen that "checks for problems", and then it gets stuck on the "Attempting repairs..." screen (I let it run for about 24 hours before giving up). What is the next step to take? I don't have any backups and no system restore points saved. I can access files and folders through a terminal window using a Windows 7 DVD, so I guess nothing is lost yet... Please help me, thanks!

    Read the article

  • Trouble with Remote Desktop pulling through printers: drive redirection works and the ports are created, but not the printers

    - by Windex
    I've run out of things to look into. All the support documents have been gone through and still provide no resolution. I've checked the service permissions (sc sdshow spooler); they all match up with other systems and with what is shown in the support documents. I'm nearly positive the issue can't be permissions anyway, as the software requires all users to be administrators, so every user is a local administrator. (I haven't looked into why yet, but it's on the list; I was just recently brought onto this team and we've put procedures in place for quick recovery.) We've applied hotfixes relating to RDS and printing, though I'm not sure which ones they were. I've combed through group policy, and nowhere is printer redirection disabled. It's set up with all default values regarding the use and redirection of printers, and a quick install of W2k8 R2 shows that it works by default. This dev install was joined to the same domain, placed in the same OU, shows the same policies applied, etc., etc. The server generates all the correct redirected ports, but no printers are created. It will also redirect drives without issue, which would seem to rule out the usermode service that handles redirects being broken. Nothing relevant is logged, and there are no events from the TerminalServices-Printer source. There were local printers set up; I didn't think it would matter, but as I was running out of ideas I tried deleting them all, with no change. The TS was configured for the software it will be running before we checked the redirection of printers, so the other team responsible for setting up new servers wants to find a fix instead of reloading a new server. I'm not sure where or what else to look for. Any ideas?

    Read the article

  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first: view:=@If(PeerReview = "Team"; "GroupNames"; "GroupMembers"); @Unique(@DbColumn(""; ""; view; 1)) The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this. If the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, in order (among other things) to bill clients based on usage. The problem we're running into is that there are a few circumstances where our support team will log into production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs saying that the client's account has logged in, which we use to calculate billing. A few ideas we had to solve this: 1) We log IP addresses as well as machine keys for each PC that logs in; we could filter out known IP addresses and/or machine keys belonging to support. The drawback is that we have to maintain the list of keys/addresses manually. 2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it does not report analytics. This is fine, as long as support (or anyone else) remembers to switch to debug mode. 3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again fine, as long as our infrastructure team remembers to set it. All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: of course we shouldn't be testing in production, but there are a few one-off customer-setup issues that we can't reproduce without logging in as them in production. This is the only time we do so, and it is the case I'm talking about in this question.)

    Read the article

  • How to train users converting from PC to Mac/Apple at a small non profit?

    - by Everette Mills
    Background: I am part of a team that provides volunteer tech support to a local non-profit. We are in a position to obtain a grant to update almost all of our computers (many of them 5 to 7 year old machines running XP), provide laptops for users that need them, etc. We are considering switching our users from PCs (WinXP) to Macs. The technical aspects of switching will not be an issue for the team; we are in the process of planning data conversions, machine setup, server changes, etc. regardless of whether we switch to Macs or much newer PCs. About a quarter of the staff use or have access to a Mac at home, so these users already understand the basics of using the equipment. We have another set of (generally younger) users who are technically savvy and, while slightly inconvenienced and slowed for a few days, should be able to switch over quickly. Finally, several members of the staff are older and have many issues using their computers today. We think that in the long run switching to Macs may provide a better user experience, fewer IT headaches, and more effective use of computers. The question we have is: what resources and training (webpages, books, online training materials or online courses) do you recommend we provide to users to enable the switchover to happen smoothly, especially with a focus on providing different levels of training and support to users with different skill levels? If you have done this in your own organization, which steps were successful, and which areas were less so?

    Read the article

  • In Windows 7, is there a way to log in from any user account, see the same workspace, and use the running programs of another user?

    - by WickedMongoose
    Our group has a number of test stands with PCs that are currently accessed via a single group login. It has been sent from on high that, for security reasons, this is not the way to do things, and we all agree. However, multiple team members from around the world log into these test stands and need to be able to access programs that would have been started under different user profiles if we no longer had a single common login. Is there a way to have a common workspace, such that when different users log in they can see and interact with all running applications as if they were using a common login? The applications we run attach to and monopolize hardware resources connected to the PC, and it is time-consuming to restart them and reload settings every time a new user logs in. Even if a program did not monopolize the hardware, many of these programs are resource-intensive and require a large portion of each machine's RAM, so starting a second copy of an application already running under another user account would quickly consume all system resources. Simple example: I open a Chrome browser while logged into our PC. I then log out, and another team member remotes in; he should be able to see my open browser and interact with it as if he were the one who opened it. Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. To be clear, this is not a request for how to give all users the ability to run a program; it is a request for how to let all users interact with running applications that were started by other users, as if the new user had started, and has control of, the application.

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure. These updates included: Web Sites — General Availability release of Windows Azure Web Sites with SLA; Mobile Services — General Availability release of Windows Azure Mobile Services with SLA; Auto-Scale — new automatic scaling support for Web Sites, Cloud Services and Virtual Machines; Alerts/Notifications — new email alerting support for all compute services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines); MSDN — no more credit card requirement for sign-up. All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Web Sites: General Availability Release of Windows Azure Web Sites
    I'm incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps. Today's General Availability release means we are taking the "preview" tag off the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites. This means we are providing a 99.9% monthly SLA (Service Level Agreement) for the Standard tier, and Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support). The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called "Reserved" during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them. You can easily scale your Standard instances on demand using the Windows Azure Management Portal. You can adjust VM instance sizes from a Small instance (1 core, 1.75 GB of RAM), up to a Medium instance (2 cores, 3.5 GB of RAM), or a Large instance (4 cores and 7 GB of RAM), and you can choose to run between 1 and 10 Standard instances, enabling you to easily scale your web backend up to 40 cores of CPU and 70 GB of RAM. Today's release also includes general availability support for custom domain SSL certificate bindings for web sites running in the Standard tier. Customers can use certificates they purchase for their custom domains with either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address. SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of an SSL cert, but the fee is per cert and not per site, which means you pay once for it regardless of how many sites you use it with). Today's release also includes the following new features:

    Auto-Scale support
    Today's Windows Azure release adds preview support for auto-scaling web sites. This enables you to set up automatic scale rules based on the activity of your instances, allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases. See below for more details.

    64-bit and 32-bit mode support
    You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode). This enables you to address even more memory within individual web applications.

    Memory dumps
    Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in the Visual Studio debugger, WinDbg, and other tools.

    Scaling Sites Independently
    Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region, so you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This is no longer necessary: Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here. Note that the Shared tier of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

    Mobile Services: General Availability Release of Windows Azure Mobile Services
    I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

    Customers
    We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

    Partners
    When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add-on to make it easy to build cross-platform connected mobile apps. Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

    Visual Studio 2013 and Windows 8.1
    This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

    Add Push Notifications
    Push notifications and live tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

    Server Explorer Integration
    In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio.

    Pricing
    With today's general availability release we are announcing that we will be offering Mobile Services in three tiers: Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the number of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale up or down the number of instances you have of each tier to increase the number of API requests your service can support, allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

    Build Conference Talks
    The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist; Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner; Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev; Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris; Who's that user? Identity in Mobile Apps — Dinesh Kulkarni; Building REST Services with JavaScript — Nathan Totten; Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum; Protips for Windows Azure Mobile Services — Chris Risner.

    AutoScale: Dynamically scale up/down your app based on real-world usage
    One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale it. Today, we're announcing that Auto-Scale is built into Windows Azure directly. With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently we support two load metrics: CPU percentage, and storage queue depth (Cloud Services and Virtual Machines only). We'll enable automatic scaling on even more scale metrics in future updates.

    When to use Auto-Scale
    The following are good criteria for services/apps that will benefit from auto-scale: the service/app can scale horizontally (i.e. it can be duplicated to multiple instances), and the service/app load changes over time. If your app meets these criteria, you should look to leverage auto-scale.

    How to Enable Auto-Scale
    To enable auto-scale, navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable. Within the Scale tab, turn the Auto-Scale setting to either CPU or Queue (for Cloud Services and VMs), then change the instance count and target CPU settings to configure the auto-scale ranges you want to maintain. For example, a web site can be configured to run using between 1 and 5 VM instances, with the exact number depending on the aggregate CPU of the VMs against a configured 40-70% range: if the aggregate CPU goes above 70%, Windows Azure automatically adds new VMs to the pool (up to the configured maximum of 5 instances), and if it drops below 40%, Windows Azure automatically starts shutting down VMs to save money. Once you've turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

    Using the Auto-Scale Preview
    With today's update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances running in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During the preview, each subscription is limited to 10 separate auto-scale rules across all of its resources (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

    Alerts and Notifications
    Starting today we are providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, VMs, Web Sites and Mobile Services). Alerts give you the ability to be proactively notified of active or impending issues within your application. You can define alert rules for: virtual machine monitoring metrics collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and monitoring metrics from monitored web endpoint URLs (response time and uptime) that you have configured; and cloud service monitoring metrics collected from the host operating system (same as VMs), monitoring metrics from the guest VM (from performance counters within the VM) and monitoring metrics from monitored web endpoint URLs (response time and uptime) that you have configured. For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitored endpoint URLs (response time and uptime) that you have configured.

    Creating Alert Rules
    You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click the Add Rule button to create an alert rule, give the rule a name and optionally a description, and then pick the service on which you want to define the rule. The next step in the alert creation wizard filters the monitoring metrics based on the service you selected. Once created, the rule shows up in your alerts list within the Settings tab; it is shown as "not activated" until it trips over the threshold you set. If the metric goes over the limit, you'll get an email notification from a Windows Azure Alerts email address ([email protected]), and when you log into the portal and revisit the Alerts tab you'll see it highlighted in red. Clicking it then lets you see what is causing it to fail, as well as view the history of when it has tripped in the past.

    Alert Notifications
    With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

    No More Credit Card Requirement for MSDN Subscribers
    Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users: MSDN users are no longer required to provide payment information (i.e. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form, and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven't signed up yet, I definitely recommend checking it out.

    Summary
    Today's release includes a ton of great features that enable you to build even better cloud solutions. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
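
    As a concrete illustration of the Web Sites memory-dump REST API mentioned above: the functionality is surfaced through each site's Kudu/SCM host. A minimal sketch, assuming a site named mysite and its deployment credentials; the endpoint paths follow the public Kudu project documentation and should be verified against your own site:

        # list worker processes via the site's Kudu endpoint
        curl -u $DEPLOY_USER https://mysite.scm.azurewebsites.net/api/processes

        # download a minidump of process id 1234 for WinDbg / Visual Studio
        curl -u $DEPLOY_USER -o w3wp.dmp \
            https://mysite.scm.azurewebsites.net/api/processes/1234/dump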

    Read the article

  • SOA Community Newsletter: new edition!

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER, AUGUST 2012. Dear SOA Partner Community member, have you submitted your feedback on the SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you complete the survey and help us to improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by the AMIS Technology Blog & Middleware by Link Consulting. Most of our courses were overbooked; if you did not get a chance or missed it, we offer a wide range of online training and the course material. A key takeaway from the advanced BPM course is to become an expert in ADF; the course from Grant Ronald, Learn Advanced ADF, is available online. The Link Consulting team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish them; please let us know if you want to share your best practices through this medium. We also encourage you to make use of the Specialization benefits: this month we are giving you an opportunity to promote your SOA & BPM events. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

    NEW CONTENT. Presentations & training material: OFM Summer Camps; Promote Your SOA & BPM Events; Advanced ADF Online, For Free, By Grant; BPM 11g Customer Stories & Solution Catalog & Process Accelerators; Delivering SOA Governance with EAMS by the Link Consulting Team; WebLogic Server Provisioning and Patching; news from our Partners & Community; updated material by Oracle. Connect and network: SOA Blogs, SOA on Facebook, SOA on LinkedIn, SOA on Twitter, Mix SOA Forum, SOA Workspace.

    PRESENTATIONS & TRAINING MATERIAL: OFM SUMMER CAMPS. Thanks to all attendees who invested their time and used the opportunity to attend the Summer Camps! Due to the high demand for most of our trainings, we had a long waiting list of partners keen to attend. We would like to give special thanks to all the trainers who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here; you can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages, www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity, or the Picasa album. Thanks for the excellent blog posts from the AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a post on @soacommunity & @wlscommunity; we will be pleased to publish it in our newsletters.

    BPM course quotes: "It's always easy, if you know what you are doing" - Torsten Winterberg, Opitz; "The best ideas are the ideas from the best" - Filipe Sequeria, Primesoft; "Best investment in education in the last 12 months" - Richard Schaller, IPT; "Practice best practice with the best instructor" - Graham Lamond, Capgemini; "If you have basic BPM knowledge, this is the course to really master it" - Diogo Henriques, Link Consulting; "Very good trainers, lots of work. Lots of fun as well" - Matthias Gris, Workflow Factory; "If you like to accelerate in Oracle, come to the training to bring it all together" - Marcel van der Glind, Amis.

    ADF course quotes: "Excellent training, great opportunity to network!" - Frank Houweling, Amis; "Lots of fun and good ideas" - Ana Santiago, GFI; "Learn ADF, worth it, Fusion Apps is the future" - Miguel Delgadillo, STO Consulting; "The best way to learn Fusion Middleware from the #1" - Alexandro Montantes, STO Consulting; "Be advanced to be the first" - Dimitar Petrov, Fadata; "Great opportunity to suck all the knowledge out of some very experienced product managers" - Wilfred van der Deijl, The Future Group.

    WebLogic course quotes: "Oracle trainings are the best" - Pedro Neto, Novobase; "Excellent training, well organized" - Pedro Antunh, Capgemini; "This course dives you into Oracle WebLogic, giving you a quick start on benefiting from Fusion Apps" - Leonardo Fernandes, Outsystems.

    Additional quotes: "Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful info, which will definitely help to increase the quality of our future projects." - Daniel Fasko, fss-group.com; "I didn't get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days." - Jeroen Bakker, Ordina; "Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese who knows Lisbon very well, a nice choice of places to visit. Looking forward to coming again next year." - Pedro Miguel Neto, Novobase.

    PROMOTE YOUR SOA & BPM EVENTS. The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information.

    ADVANCED ADF ONLINE, FOR FREE, BY GRANT. The second part of the advanced ADF online eCourse is live now! This covers the advanced topics of regions and region interaction, as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid covering the more advanced topics of ADF development. The content is developed by Product Management and our curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two.

    BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS. Stories: everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before, what worked, and what was achieved. If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem, or in combination with a specific technology, and who want to talk to someone who has done it before. These stories are invaluable. Drop us the details of anything you think is relevant, with a bit of detail, and we will follow up on it. As one good deed deserves another, we will do our best to give you stories if you need them to show that where you are going, others have trodden before. Send your stories to us using this e-mail link and we will share them among other like-minded people. Solution catalogue: this summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario in the same industry or a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Get in touch with us via this e-mail link and we will make sure to add your solution. Process accelerators: finally, if there are specific processes that you are expert on, that you have implemented at a customer, and that you want to work with us on getting productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community newsletter, but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly, Director BPM, [email protected]

    DELIVERING SOA GOVERNANCE WITH EAMS BY THE LINK CONSULTING TEAM. In the last 12 years Link Consulting has built its presence in specific areas such as governance and architecture, both in terms of practices and methodologies, and in products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience: it combines the architecture management solution with OER to deliver a product specialized for SOA Governance that gathers the best of two worlds into a solution enabling SOA Governance projects, initiatives and programs. Enterprise Architecture Management System: EAMS is an automation-based solution that enables the efficient management of enterprise architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository: Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. It provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include asset management, asset lifecycle management, usage tracking, service discovery, version management, dependency analysis, and portfolio management. EAMS - OER Edition: the solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints: the solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape; more blueprints can easily be created to accommodate the organization's needs in terms of detail, audience and metadata. Charts & dashboards: the solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time-based visualization: all representations are time-bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present and future; the solution delivers gap analysis and a project-oriented approach while taking into consideration the As-Was, As-Is and To-Be. Differentiating factors: extensive automation and maintenance of architectural representations; an organization-wide solution; easy access to, and navigation between, all architectural artifacts and representations; a flexible meta-model with customization and extensibility capabilities; lifecycle management and enforcement of the time dimension over all repository content; profile-based customization; comprehensive visibility; architectural alignment; friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here.

    WEBLOGIC SERVER PROVISIONING AND PATCHING. For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features, as well as other features of the pack, please visit the pack's saleskit page. Demo highlights: the demo showcases patching Oracle WebLogic Servers, standardizing WebLogic Server patch rollouts, creating a WebLogic domain provisioning profile, cloning a WebLogic domain from a provisioning profile, deploying a Java EE application, and scaling out an Oracle WebLogic cluster. Demo instructions: go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Software Lifecycle Automation" section, click on the link "EM Cloud Control 12c WLS Provisioning and Patching" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • Sendmail Failing to Forward Locally Addressed Mail to Exchange Server

    - by DomainSoil
    I've recently gained employment as a web developer with a small company. What they neglected to tell me upon hire was that I would be administrating the server along with my other daily duties. Now, truth be told, I'm not clueless when it comes to these things, but this is my first rodeo working with a rack server/console.. However, I'm confident that I will be able to work through any solutions you provide. Short Description: When a customer places an order via our (Magento CE 1.8.1.0) website, a copy of said order is supposed to be BCC'd to our sales manager. I say supposed because this was a working feature before the old administrator left. Long Description: Shortly after I started, we had a server crash which required a server restart. After restart, we noticed a few features on our site weren't working, but all those have been cleaned up except this one. I had to create an account on our server for root access. When a customer places an order, our sites software (Magento CE 1.8.1.0) is configured to BCC the customers order email to our sales manager. We use a Microsoft Exchange 2007 Server for our mail, which is hosted on a different machine (in-house) that I don't have access to ATM, but I'm sure I could if needed. As far as I can tell, all other external emails work.. Only INTERNAL email addresses fail to deliver. I know this because I've also tested my own internal address via our website. I set up an account with an internal email, made a test order, and never received the email. I changed my email for the account to an external GMail account, and received emails as expected. Let's dive into the logs and config's. For privacy/security reasons, names have been changed to the following: domain.com = Our Top Level Domain. email.local = Our Exchange Server. example.com = ANY other TLD. OLDadmin = Our previous Server Administrator. NEWadmin = Me. SALES@ = Our Sales Manager. Customer# = A Customer. Here's a list of the programs and config files used that hold relevant for this issue: Server: > [root@www ~]# cat /etc/centos-release CentOS release 6.3 (final) Sendmail: > [root@www ~]# sendmail -d0.1 -bt < /dev/null Version 8.14.4 ========SYSTEM IDENTITY (after readcf)======== (short domain name) $w = domain (canonical domain name) $j = domain.com (subdomain name) $m = com (node name) $k = www.domain.com > [root@www ~]# rpm -qa | grep -i sendmail sendmail-cf-8.14.4-8.e16.noarch sendmail-8.14-4-8.e16.x86_64 nslookup: > [root@www ~]# nslookup email.local Name: email.local Address: 192.168.1.50 hostname: > [root@www ~]# hostname www.domain.com /etc/mail/access: > [root@www ~]# vi /etc/mail/access Connect:localhost.localdomain RELAY Connect:localhost RELAY Connect:127.0.0.1 RELAY /etc/mail/domaintable: > [root@www ~]# vi /etc/mail/domaintable # /etc/mail/local-host-names: > [root@www ~]# vi /etc/mail/local-host-names # /etc/mail/mailertable: > [root@www ~]# vi /etc/mail/mailertable # /etc/mail/sendmail.cf: > [root@www ~]# vi /etc/mail/sendmail.cf ###################################################################### ##### ##### DO NOT EDIT THIS FILE! Only edit the source .mc file. 
##### ###################################################################### ###################################################################### ##### $Id: cfhead.m4,v 8.120 2009/01/23 22:39:21 ca Exp $ ##### ##### $Id: cf.m4,v 8.32 1999/02/07 07:26:14 gshapiro Exp $ ##### ##### setup for linux ##### ##### $Id: linux.m4,v 8.13 2000/09/17 17:30:00 gshapiro Exp $ ##### ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ ##### ##### $Id: no_default_msa.m4,v 8.2 2001/02/14 05:03:22 gshapiro Exp $ ##### ##### $Id: smrsh.m4,v 8.14 1999/11/18 05:06:23 ca Exp $ ##### ##### $Id: mailertable.m4,v 8.25 2002/06/27 23:23:57 gshapiro Exp $ ##### ##### $Id: virtusertable.m4,v 8.23 2002/06/27 23:23:57 gshapiro Exp $ ##### ##### $Id: redirect.m4,v 8.15 1999/08/06 01:47:36 gshapiro Exp $ ##### ##### $Id: always_add_domain.m4,v 8.11 2000/09/12 22:00:53 ca Exp $ ##### ##### $Id: use_cw_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ ##### ##### $Id: use_ct_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ ##### ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ ##### ##### $Id: access_db.m4,v 8.27 2006/07/06 21:10:10 ca Exp $ ##### ##### $Id: blacklist_recipients.m4,v 8.13 1999/04/02 02:25:13 gshapiro Exp $ ##### ##### $Id: accept_unresolvable_domains.m4,v 8.10 1999/02/07 07:26:07 gshapiro Exp $ ##### ##### $Id: masquerade_envelope.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ ##### ##### $Id: masquerade_entire_domain.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ ##### ##### $Id: proto.m4,v 8.741 2009/12/11 00:04:53 ca Exp $ ##### # level 10 config file format V10/Berkeley # override file safeties - setting this option compromises system security, # addressing the actual file configuration problem is preferred # need to set this before any file actions are encountered in the cf file #O DontBlameSendmail=safe # default LDAP map specification # need to set this now before any LDAP maps are defined #O LDAPDefaultSpec=-h localhost ################## # local info # ################## # my LDAP cluster # need to set this before any LDAP lookups are done (including classes) #D{sendmailMTACluster}$m Cwlocalhost # file containing names of hosts for which we receive email Fw/etc/mail/local-host-names # my official domain name # ... define this only if sendmail cannot automatically determine your domain #Dj$w.Foo.COM # host/domain names ending with a token in class P are canonical CP. # "Smart" relay host (may be null) DSemail.local # operators that cannot be in local usernames (i.e., network indicators) CO @ % ! # a class with just dot (for identifying canonical names) C.. # a class with just a left bracket (for identifying domain literals) C[[ # access_db acceptance class C{Accept}OK RELAY C{ResOk}OKR # Hosts for which relaying is permitted ($=R) FR-o /etc/mail/relay-domains # arithmetic map Karith arith # macro storage map Kmacro macro # possible values for TLS_connection in access map C{Tls}VERIFY ENCR # who I send unqualified names to if FEATURE(stickyhost) is used # (null means deliver locally) DRemail.local. # who gets all local email traffic # ($R has precedence for unqualified names if FEATURE(stickyhost) is used) DHemail.local. 
# dequoting map
Kdequote dequote

# class E: names that should be exposed as from this host, even if we masquerade
# class L: names that should be delivered locally, even if we have a relay
# class M: domains that should be converted to $M
# class N: domains that should not be converted to $M
#CL root
C{E}root
C{w}localhost.localdomain
C{M}domain.com

# who I masquerade as (null for no masquerading) (see also $=M)
DMdomain.com

# my name for error messages
DnMAILER-DAEMON

# Mailer table (overriding domains)
Kmailertable hash -o /etc/mail/mailertable.db

# Virtual user table (maps incoming users)
Kvirtuser hash -o /etc/mail/virtusertable.db

CPREDIRECT

# Access list database (for spam stomping)
Kaccess hash -T<TMPF> -o /etc/mail/access.db

# Configuration version number
DZ8.14.4

/etc/mail/sendmail.mc:
> [root@www ~]# vi /etc/mail/sendmail.mc
divert(-1)dnl
dnl #
dnl # This is the sendmail macro config file for m4. If you make changes to
dnl # /etc/mail/sendmail.mc, you will need to regenerate the
dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
dnl # installed and then performing a
dnl #
dnl #     /etc/mail/make
dnl #
include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
VERSIONID(`setup for linux')dnl
OSTYPE(`linux')dnl
dnl #
dnl # Do not advertize sendmail version.
dnl #
dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
dnl #
dnl # default logging level is 9, you might want to set it higher to
dnl # debug the configuration
dnl #
dnl define(`confLOG_LEVEL', `9')dnl
dnl #
dnl # Uncomment and edit the following line if your outgoing mail needs to
dnl # be sent out through an external mail server:
dnl #
define(`SMART_HOST', `email.local')dnl
dnl #
define(`confDEF_USER_ID', ``8:12'')dnl
dnl define(`confAUTO_REBUILD')dnl
define(`confTO_CONNECT', `1m')dnl
define(`confTRY_NULL_MX_LIST', `True')dnl
define(`confDONT_PROBE_INTERFACES', `True')dnl
define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
define(`ALIAS_FILE', `/etc/aliases')dnl
define(`STATUS_FILE', `/var/log/mail/statistics')dnl
define(`UUCP_MAILER_MAX', `2000000')dnl
define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confAUTH_OPTIONS', `A')dnl
dnl #
dnl # The following allows relaying if the user authenticates, and disallows
dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
dnl #
dnl define(`confAUTH_OPTIONS', `A p')dnl
dnl #
dnl # PLAIN is the preferred plaintext authentication method and used by
dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
dnl # use LOGIN. Other mechanisms should be used if the connection is not
dnl # guaranteed secure.
dnl # Please remember that saslauthd needs to be running for AUTH.
dnl #
dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
dnl #
dnl # Rudimentary information on creating certificates for sendmail TLS:
dnl #     cd /etc/pki/tls/certs; make sendmail.pem
dnl # Complete usage:
dnl #     make -C /etc/pki/tls/certs usage
dnl #
dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
dnl #
dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
dnl # slapd, which requires the file to be readble by group ldap
dnl #
dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
dnl #
dnl define(`confTO_QUEUEWARN', `4h')dnl
dnl define(`confTO_QUEUERETURN', `5d')dnl
dnl define(`confQUEUE_LA', `12')dnl
dnl define(`confREFUSE_LA', `18')dnl
define(`confTO_IDENT', `0')dnl
dnl FEATURE(delay_checks)dnl
FEATURE(`no_default_msa', `dnl')dnl
FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
FEATURE(redirect)dnl
FEATURE(always_add_domain)dnl
FEATURE(use_cw_file)dnl
FEATURE(use_ct_file)dnl
dnl #
dnl # The following limits the number of processes sendmail can fork to accept
dnl # incoming messages or process its message queues to 20.) sendmail refuses
dnl # to accept connections once it has reached its quota of child processes.
dnl #
dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
dnl #
dnl # Limits the number of new connections per second. This caps the overhead
dnl # incurred due to forking new sendmail processes. May be useful against
dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
dnl # limit would be useful but is not available as an option at this writing.)
dnl #
dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
dnl #
dnl # The -t option will retry delivery if e.g. the user runs over his quota.
dnl #
FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
FEATURE(`blacklist_recipients')dnl
EXPOSED_USER(`root')dnl
dnl #
dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
dnl # the following 2 definitions and activate below in the MAILER section the
dnl # cyrusv2 mailer.
dnl #
dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
dnl #
dnl # The following causes sendmail to only listen on the IPv4 loopback address
dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
dnl # address restriction to accept email from the internet or intranet.
dnl #
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
dnl #
dnl # The following causes sendmail to additionally listen to port 587 for
dnl # mail from MUAs that authenticate. Roaming users who can't reach their
dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
dnl # this useful.
dnl #
dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
dnl #
dnl # The following causes sendmail to additionally listen to port 465, but
dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS
dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
dnl #
dnl # For this to work your OpenSSL certificates must be configured.
dnl #
dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
dnl #
dnl # The following causes sendmail to additionally listen on the IPv6 loopback
dnl # device. Remove the loopback address restriction listen to the network.
dnl #
dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
dnl #
dnl # enable both ipv6 and ipv4 in sendmail:
dnl #
dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
dnl #
dnl # We strongly recommend not accepting unresolvable domains if you want to
dnl # protect yourself from spam. However, the laptop and users on computers
dnl # that do not have 24x7 DNS do need this.
dnl #
FEATURE(`accept_unresolvable_domains')dnl
dnl #
dnl FEATURE(`relay_based_on_MX')dnl
dnl #
dnl # Also accept email sent to "localhost.localdomain" as local email.
dnl #
LOCAL_DOMAIN(`localhost.localdomain')dnl
dnl #
dnl # The following example makes mail from this host and any additional
dnl # specified domains appear to be sent from mydomain.com
dnl #
MASQUERADE_AS(`domain.com')dnl
dnl #
dnl # masquerade not just the headers, but the envelope as well
dnl FEATURE(masquerade_envelope)dnl
dnl #
dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
dnl #
FEATURE(masquerade_entire_domain)dnl
dnl #
MASQUERADE_DOMAIN(domain.com)dnl
dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl

/etc/mail/trusted-users:
> [root@www ~]# vi /etc/mail/trusted-users
#

/etc/mail/virtusertable:
> [root@www ~]# vi /etc/mail/virtusertable
[email protected] [email protected]
[email protected] [email protected]

/etc/hosts:
> [root@www ~]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.50 email.local

I've only included the "local info" part of sendmail.cf, to save space. If there are any files that I've missed, please advise so I may produce them.

Now that that's out of the way, let's look at some entries from /var/log/maillog. The first entry is from an order BEFORE the crash, when the site was working as expected.

##Order 200005374 Aug 5, 2014 7:06:38 AM##
Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: from=OLDadmin, size=11091, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 07:06:39 www sendmail[26150]: s75C6dXe026150: from=<[email protected]>, size=11257, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: [email protected],=?utf-8?B?dGhvbWFzICBHaWxsZXNwaWU=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71091, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75C6dXe026150 Message accepted for delivery)
Aug 5 07:06:40 www sendmail[26152]: s75C6dXe026150: to=<[email protected]>,<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=161257, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)

This next entry from maillog is from an order AFTER the crash.
##Order 200005375 Aug 5, 2014 9:45:25 AM##
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: from=OLDadmin, size=11344, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: <[email protected]>... User unknown
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: [email protected], ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: from=<[email protected]>, size=11500, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: to==?utf-8?B?S2VubmV0aCBCaWViZXI=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm1030022 Message accepted for delivery)
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: s75EjQ4P030021: DSN: User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: <[email protected]>... User unknown
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: to=OLDadmin, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42368, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: from=<>, size=12368, class=0, nrcpts=0, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: s75EjQ4Q030021: return to sender: User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm5030022: from=<>, size=14845, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4Q030021: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=43392, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm5030022 Message accepted for delivery)
Aug 5 09:45:26 www sendmail[30025]: s75EjQm5030022: to=root, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=45053, dsn=2.0.0, stat=Sent
Aug 5 09:45:27 www sendmail[30024]: s75EjQm1030022: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=131500, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)

To add a little more, I think I've pinpointed the actual crash event.
##THE CRASH##
Aug 5 09:39:46 www sendmail[3251]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:39:46 www sm-msp-queue[3260]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:39:46 www sm-msp-queue[29370]: starting daemon (8.14.4): queueing@01:00:00
Aug 5 09:39:47 www sendmail[29372]: starting daemon (8.14.4): SMTP+queueing@01:00:00
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee23t029466 Message accepted for delivery)
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee2wh029467 Message accepted for delivery)
Aug 5 09:40:06 www sm-msp-queue[29370]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:40:06 www sendmail[29372]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:40:06 www sm-msp-queue[29888]: starting daemon (8.14.4): queueing@01:00:00
Aug 5 09:40:06 www sendmail[29890]: starting daemon (8.14.4): SMTP+queueing@01:00:00
Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: s75Ee6xY029891: DSN: User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee6xY029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent
Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: s75Ee6xZ029891: DSN: User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee6xZ029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent

Something to note about the maillogs: before the crash, the msgid included localhost.localdomain; after the crash it has been domain.com.

Thanks to all who take the time to read and look into this. I appreciate it, and I look forward to tackling the issue together.
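EDIT: One more thing, in case it helps frame an answer. My working theory is that after the crash/restart, sendmail started treating @domain.com recipients as local users instead of relaying them to Exchange (hence the "User unknown" and the msgid change noted above). Here is a sketch of what I'm considering, based on my own reading and NOT yet applied to the box; 192.168.1.50 is the Exchange server per my naming table, and please correct me if this is the wrong approach:

> [root@www ~]# sendmail -bv SALES@domain.com   # ask sendmail how it would route an internal address
> [root@www ~]# echo "domain.com    smtp:[192.168.1.50]" >> /etc/mail/mailertable
> [root@www ~]# makemap hash /etc/mail/mailertable < /etc/mail/mailertable
> [root@www ~]# service sendmail restart

As I understand it, the mailertable entry would force everything addressed to domain.com to be handed to the Exchange box over SMTP (the square brackets suppress the MX lookup). Is that a sane fix here, or would it just mask a deeper config problem?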

    Read the article

  • Configure BL-C111A IP Camera

    - by csmba
I am an AT&T DSL customer. I want the camera to send an email when motion detection is triggered. Can anyone tell me how they managed to do that using: GMail (I can't get it to work; I don't think the camera supports SSL, which GMail requires), or other alternatives if GMail is not supported.
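One workaround I've been considering (untested, just an idea from reading about similar cameras): run a small TLS wrapper such as stunnel on an always-on PC, point the camera's SMTP settings at that PC on a plain-text port, and let stunnel wrap the session in SSL toward GMail's port 465. A minimal stunnel.conf sketch, assuming the camera can do plain SMTP with authentication (192.168.0.10 is a hypothetical LAN address for the PC running stunnel):

[gmail-smtp]
client = yes
accept = 192.168.0.10:2525
connect = smtp.gmail.com:465

The camera would then use 192.168.0.10 port 2525 as its SMTP server, with my GMail username/password for SMTP authentication. Would that work, or is there a simpler alternative?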

    Read the article

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients' PCs?

    - by Sootah
    Howdy fellow computer geeks, I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely do we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways we can service our clients remotely whenever possible. What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like a web-based solution so that we don't have to walk customers through installing, connecting, and configuring it over the phone; having to do that would be unacceptable given the level of service they are used to. Effectively, we want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously, the option to just email them a link that does all this for them is what we'd be aiming for.) Along with the ease-of-use factor, the product must not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options I may not have come across. So: are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euros for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services, and Amazon doesn't charge for stopped instances. I was wondering: should I have known this beforehand, or is Microsoft doing a bad job of communicating the pricing model clearly? Quote from http://www.microsoft.com/windowsazure/pricing/ : "Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours." I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just "stopped" them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as being "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only needs a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for instances that aren't running, well, that's Azure's choice, not mine. I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be clearer about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance! UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess it never reached a human being because I haven't heard anything back. So I called today and spoke with a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem. UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady had some good news: the money will be refunded. The invoice has not been generated yet, so my credit card will first be charged and then refunded, but hey, that's no problem for me! I'm glad with the way it was resolved. Thanks everybody!
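    EDIT: For anyone checking the math on the original bill (the hourly rate here is my rough recollection of the small-instance price at the time, not an official figure): 3 instances x 24 hours/day x 14 days = 1,008 compute hours, and 1,008 hours x ~0.09 euro/hour is roughly 91 euros. So a bill of over 100 euros for a few weeks of "stopped" instances is about what the meter would produce if stopped still counts as deployed.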

    Read the article

  • Best Laptop for Student?

    - by George Stocker
    I'm looking for a laptop to buy that meets the following criteria: a company with a good warranty and technical support/customer service; less than $1000; it does not need to be a powerhouse, but it needs to have good bang for the buck (not a Celeron processor; at least 2 GB of memory, etc.); it will be used for school-related activities (homework, classwork, general computer usage). This laptop is for a friend of mine, a new college student.

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What's the best Remote Desktop Application? I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely do we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways we can service our clients remotely whenever possible. What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like a web-based solution so that we don't have to walk customers through installing, connecting, and configuring it over the phone; having to do that would be unacceptable given the level of service they are used to. Effectively, we want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously, the option to just email them a link that does all this for them is what we'd be aiming for.) Along with the ease-of-use factor, the product must not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options I may not have come across. Are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • Directory listing through FTPS (TLS) is not working

    - by Aron Rotteveel
    We recently switched our server to require TLS for every connection. This is working flawlessly so far, but one of our clients is having problems. Some facts:

Server uses Pure-FTPd
Server has a passive port range configured
Server has no firewall limitations regarding FTP
Client uses WS_FTP
Client is behind a router
Client connects to the same IP as everyone else, using PASSIVE mode
All other clients have no trouble connecting

Because of the TLS requirement, connecting in ACTIVE mode is practically impossible, but PASSIVE mode is working fine for everyone except this specific client. It seems that he is able to connect, but once a LIST command is performed, things go wrong. Log:

Finding Host <clienthost> ...
Connecting to <serverip:21>
Connected to <serverip:21> in 0.020000 seconds, Waiting for Server Response
Initializing SSL Session ...
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 5 of 50 allowed.
220-Local time is now 22:14. Server port: 21.
220-This is a private system - No anonymous login
220-IPv6 connections are also welcome on this server.
220 You will be disconnected after 15 minutes of inactivity.
AUTH TLS
234 AUTH TLS OK.
SSL session NOT set for reuse
SSL Session Started.
Host type (1): Automatic Detect
USER <user>
331 User <user> OK. Password required
PASS (hidden)
230-User <user> has group access to: <user>
230 OK. Current restricted directory is /
SYST
215 UNIX Type: L8
Host type (2): Unix (Standard)
PBSZ 0
200 PBSZ=0
PROT P
200 Data protection level set to "private"
PWD
257 "/" is your current location
CWD /public_html
250 OK. Current directory is /public_html
PWD
257 "/public_html" is your current location
TYPE A
200 TYPE is now ASCII
PASV
227 Entering Passive Mode (<serverip>,132,100)
connecting data channel to <serverip>:132,100(33892)
Substituting connection address <serverip> for private address <serverip> from PASV
Using external address <customer ext. ip> instead of local address <customer int. ip> for PORT command
PORT 82,161,56,225,195,181
200 PORT command successful
LIST
Error reading response from server. It appears that the connection is dead.
Attempting reconnect...

Any help is appreciated.
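EDIT: In case it's useful to anyone answering, here's how I've been reproducing the test from a spare machine behind NAT, using lftp to force TLS and passive mode (a quick sanity check under assumed credentials, not part of our normal setup; substitute your own user and server):

lftp -u <user> -e "set ftp:ssl-force true; set ftp:ssl-protect-data true; set ftp:passive-mode true; ls; bye" <serverip>

From that machine the directory listing works, which makes me suspect this client's WS_FTP is falling back to a PORT (active mode) command after the PASV, as the log above seems to show; with TLS the router can no longer inspect and rewrite the FTP commands, so that active fallback would fail.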

    Read the article

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side, I have a Linux-based firewall running Debian with Openswan installed. I am having an issue getting to Phase 2 of the VPN negotiation. Here is the Cisco information I was sent ({my_public_ip} = left side of the connection):

tunnel-group {my_public_ip} type ipsec-l2l
tunnel-group {my_public_ip} ipsec-attributes
pre-shared-key fakefake
crypto map vpn1 1 match address customer-ipsec
crypto map vpn1 1 set peer {my_public_ip}
crypto map vpn1 1 set transform-set aes-256-sha
crypto map vpn1 interface outside
static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255
crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
crypto ipsec security-association lifetime seconds 28800
crypto ipsec security-association lifetime kilobytes 4608000
crypto map vpn1 1 match address customer-ipsec
crypto map vpn1 1 set peer {my_public_ip}
crypto map vpn1 1 set transform-set aes-256-sha
crypto map vpn1 interface outside
crypto isakmp enable outside
crypto isakmp policy 1
authentication pre-share
encryption aes-256
hash sha
group 2
lifetime 86400

My side, ipsec.conf:

config setup
    klipsdebug=none
    plutodebug=none
    protostack=netkey
    #nat_traversal=yes

conn cisco
    #name of VPN connection
    type=tunnel
    authby=secret
    #left side (my side)
    left={myPublicIP}
    leftsubnet=172.16.250.0/24    #subnet on the left side to present to the right side
    leftnexthop=%defaultroute
    #right security gateway (ASA side)
    right={CiscoASA_publicIP}    #cisco ASA
    rightsubnet=10.2.1.0/24
    rightnexthop=%defaultroute
    #crypto stuff
    keyexchange=ike
    ikelifetime=86400s
    auth=esp
    pfs=no
    compress=no
    auto=start

ipsec.secrets file:

{CiscoASA_publicIP} {myPublicIP}: PSK "fakefake"

When I start ipsec from the left side (my side) I don't receive any errors; however, when I run the ipsec auto --status command:

000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0
000 "cisco": myip=unset; hisip=unset;
000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0;
000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0;
000
000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate
000 #2: pending Phase 2 for "cisco" replacing #0

Now, I'm new to setting up a site-to-site IPSEC tunnel, so I'm unsure what the status information means. All I know is that it sits at this "pending Phase 2" and I can't ping the other side. Another question I have: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Linux Debian firewall on my side? I appreciate your time to look at this.
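EDIT: In case it helps frame an answer, here's the direction I'm experimenting with on my side (my own assumption, not verified). Since the status shows STATE_MAIN_I1 (sent MI1, expecting MR1), it looks like the ASA never answers my first main-mode packet, so I'm explicitly pinning the Phase 1 and Phase 2 proposals in ipsec.conf to mirror the ASA's policy and double-checking that UDP 500 isn't filtered between the gateways:

conn cisco
    # ...existing settings as above...
    ike=aes256-sha1;modp1024    # Phase 1: match the ASA's aes-256 / sha / group 2 policy
    esp=aes256-sha1             # Phase 2: match transform-set esp-aes-256 esp-sha-hmac
    ikelifetime=86400s          # match isakmp policy lifetime 86400
    keylife=28800s              # match the ASA's 28800-second IPsec SA lifetime

Does that look like the right way to mirror the ASA configuration, or am I off base?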

    Read the article

< Previous Page | 228 229 230 231 232 233 234 235 236 237 238 239  | Next Page >