Daily Archives

Articles indexed Monday August 18 2014

Page 2/20 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspxNow that we’ve talked a little about Puppet. Let’s see how easy it is to get started. Install Puppet Let’s get Puppet Installed. There are two ways to do that: With Chocolatey: Open an administrative/elevated command shell and type: choco install puppet Download and install Puppet manually - http://puppetlabs.com/misc/download-options Run Puppet Let’s make pasting into a console window work with Control + V (like it should): choco install wincommandpaste If you have a cmd.exe command shell open, (and chocolatey installed) type: RefreshEnv The previous command will refresh your environment variables, ala Chocolatey v0.9.8.24+. If you were running PowerShell, there isn’t yet a refreshenv for you (one is coming though!). If you have to restart your CLI (command line interface) session or you installed Puppet manually open an administrative/elevated command shell and type: puppet resource user Output should look similar to a few of these: user { 'Administrator': ensure => 'present', comment => 'Built-in account for administering the computer/domain', groups => ['Administrators'], uid => 'S-1-5-21-some-numbers-yo-500', } Let's create a user: puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created Run the 'puppet resource user' command again. Note the user we created is there! Let’s clean up after ourselves and remove that user we just created: puppet apply -e "user {'bobbytables_123': ensure => absent, }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed Run the 'puppet resource user' command one last time. Note we just removed a user! Conclusion You just did some configuration management /system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we’ll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

    Read the article

  • Puppet: Making Windows Awesome Since 2011

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-making-windows-awesome-since-2011.aspx Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure, with 1/3 of the platform client development staff being Windows folks. It appears that Microsoft believed an end state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC. Puppet Labs is pushing the envelope on Windows. Here are several things to note: Puppet x64 Ruby support for Windows coming in v3.7.0. An awesome ACL module (with order, SIDs and very granular control of permissions, it is the best of any CM). A wealth of modules that work with Windows on the Forge (and more on GitHub). Documentation solely for Windows folks - https://docs.puppetlabs.com/windows. Some of the common learning points for Puppet on Windows users are noted in this recent blog post. Microsoft OpenTech supports Puppet. Azure has the ability to deploy a Puppet Master (http://puppetlabs.com/solutions/microsoft). At Microsoft //Build 2014, in the Day 2 Keynote, Puppet Labs CEO Luke Kanies co-presented with Mark Russinovich (http://channel9.msdn.com/Events/Build/2014/KEY02  fast forward to 19:30)! Puppet has a Visual Studio Plugin! It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to the puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources! Puppet does take some time to learn, but with anything you need to learn, you need to weigh the benefits versus the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then but was the only game on the street. Puppet’s ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs. As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense. You may also find that right now Puppet doesn’t run manifests (scripts) in the order the resources are specified. This is the number one learning point for most folks. Manifest ordering has long been a point of consternation for some folks; it was not possible in the past. In fact it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose?
Other CMs are integrating with DSC; will Puppet follow suit and integrate with DSC? The biggest concern that I have with DSC is its lack of visibility in fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping that newer releases actually work; it does have some promising capabilities, but it just doesn’t quite come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around a while and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows. Yes, Puppet on Windows is here to stay and it’s continually getting better, folks.
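
    For a taste of the ACL module mentioned above, a resource looks roughly like this (a sketch only: it assumes the puppetlabs/acl Forge module is installed, and the path and identities are made-up examples, not from the article):

        acl { 'c:/temp/secret':
          permissions => [
            { identity => 'Administrators', rights => ['full'] },
            { identity => 'Users',          rights => ['read','execute'] },
          ],
        }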

    Read the article

  • O’Reilly Deal of the Day 6/August/2014 - Professional C# 5.0 and .NET 4.5.1

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/06/orsquoreilly-deal-of-the-day-6august2014---professional-c-5.0.aspxToday’s half-price deal from O’Reilly at http://shop.oreilly.com/product/9781118833032.do?code=MSDEAL, is Professional C# 5.0 and .NET 4.5.1. “Written by a dream team of .NET experts, Professional C# 5.0 and .NET 4.5.1 includes everything developers need to work with C#, the language of choice for .NET applications. This book is perfect for both experienced C# programmers looking to sharpen their skills and professional developers who are using C# for the first time. The authors deliver unparalleled coverage of Visual Studio 2013 and .NET Framework 4.5.1 additions, as well as new test-driven development and concurrent programming features. Source code for all the examples are available for download, so you can start writing Windows desktop, Windows Store apps, and ASP.NET web applications immediately.”

    Read the article

  • O’Reilly offer to 05:00 PT 12/Aug/2014 - Ivor Horton's Beginning Visual C++ 2013

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/05/orsquoreilly-offer-to-0500-pt-12aug2014---ivor-hortons-beginning.aspxUntil 05:00 PT 12/Aug/2014, O’Reilly at http://shop.oreilly.com/product/9781118845714.do?code=WKCPPS are offering 50% off Ivor Horton's Beginning Visual C++ 2013. “This latest edition of the bestselling book on the C++ language follows the proven approach that has made all of Ivor Horton’s C++ books so popular. Horton provides a comprehensive introduction to both the Standard C++ language, and to Visual C++. The book—thoroughly updated for the 2013 release—shows readers how to build real-world applications using Visual C++. No previous programming experience is required. The author uses numerous step-by-step programming examples to guide readers through the ins and outs of C++ development.”

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and running the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component here and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself. Component List Research There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my last consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used: PC Part Picker Tom’s Hardware bit-tech Reddit Purchase PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly).
Ultimately though, price wasn’t always the best and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them including mouse (locally) and headphones. NCIX is a company that I’ve never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy on you buying insurance for your orders turned me off. That policy makes it clear to me that the company finds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange only return policies on components. To their credit those policies are listed in the cart underneath each item. The visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them. Coming Next In the next part I’ll tackle my build experience.

    Read the article

  • O’Reilly Deal of the Day 4/Aug/2014 - Windows PowerShell 4.0 for .NET Developers

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/04/orsquoreilly-deal-of-the-day-4aug2014---windows-powershell-4.0.aspxToday’s half-price Deal of the Day from O’Reilly at http://shop.oreilly.com/product/9781849688765.do?code=MSDEAL is Windows PowerShell 4.0 for .NET Developers. “The world of technology is growing faster than ever, and the business needs are getting more complex every day. With PowerShell in your toolbox, you have an object-based scripting language, task-based shell, along with a powerful automation engine. PowerShell is built on top of .NET framework which gives an edge over the other tools when it comes to integration and automation of Microsoft product and technologies.”

    Read the article

  • API Management Video

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2014/08/03/157900.aspx Just wanted to put the word out that the API Management video from the recent user group meeting is available. The page at the link below has resources from that meeting: http://ukcsug.co.uk/past-events/2014-07-07/ Also, we have our next two meetings available for registration at the following links: Hybrid Connections https://www.eventbrite.com/e/azure-biztalk-services-hybrid-connections-tickets-12216617231?aff=eorg Hybrid Integration with Dynamics CRM https://www.eventbrite.com/e/hybrid-integration-with-microsoft-dynamics-crm-tickets-12398067955?aff=eorg

    Read the article

  • APress Deal of the Day 3/August/2014 - Beginning Windows 8.1

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/03/apress-deal-of-the-day-3august2014---beginning-windows-8.1.aspxToday’s $10 Deal of the Day from APress at http://www.apress.com/9781430263586 is Beginning Windows 8.1. “Beginning Windows 8.1 takes you through the new features and helps you get more out of the familiar to reveal the fullest possibilities for this amazing new operating system.”

    Read the article

  • HPET for x86 BSP (how to build it for WCE8)

    - by Werner Willemsens
    Originally posted on: http://geekswithblogs.net/WernerWillemsens/archive/2014/08/02/157895.aspx"I needed a timer". That is how we started a few blogs ago our series about APIC and ACPI. Well, here it is. HPET (High Precision Event Timer) was introduced by Intel in early 2000 to: Replace old style Intel 8253 (1981!) and 8254 timers Support more accurate timers that could be used for multimedia purposes. Hence Microsoft and Intel sometimes refers to HPET as Multimedia timers. An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit wide. The HPET is discoverable via ACPI. The HPET circuit in recent Intel platforms is integrated into the SouthBridge chip (e.g. 82801) All HPET timers should support one-shot interrupt programming, while optionally they can support periodic interrupts. In most Intel SouthBridges I worked with, there are three HPET timers. TIMER0 supports both one-shot and periodic mode, while TIMER1 and TIMER2 are one-shot only. Each HPET timer can generate interrupts, both in old-style PIC mode and in APIC mode. However in PIC mode, interrupts cannot freely be chosen. Typically IRQ11 is available and cannot be shared with any other interrupt! Which makes the HPET in PIC mode virtually unusable. In APIC mode however more IRQs are available and can be shared with other interrupt generating devices. (Check the datasheet of your SouthBridge) Because of this higher level of freedom, I created the APIC BSP (see previous posts). The HPET driver code that I present you here uses this APIC mode. Hpet.reg [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\Hpet] "Dll"="Hpet.dll" "Prefix"="HPT" "Order"=dword:10 "IsrDll"="giisr.dll" "IsrHandler"="ISRHandler" "Priority256"=dword:50 Because HPET does not reside on the PCI bus, but can be found through ACPI as a memory mapped device, you don't need to specify the "Class", "SubClass", "ProgIF" and other PCI related registry keys that you typically find for PCI devices. If a driver needs to run its internal thread(s) at a certain priority level, by convention in Windows CE you add the "Priority256" registry key. Through this key you can easily play with the driver's thread priority for better response and timer accuracy. See later. Hpet.cpp (Hpet.dll) This cpp file contains the complete HPET driver code. The file is part of a folder that you typically integrate in your BSP (\src\drivers\Hpet). It is written as sample (example) code, you most likely want to change this code to your specific needs. There are two sets of #define's that I use to control how the driver works. _TRIGGER_EVENT or _TRIGGER_SEMAPHORE: _TRIGGER_EVENT will let your driver trigger a Windows CE Event when the timer expires, _TRIGGER_SEMAPHORE will trigger a Windows CE counting Semaphore. The latter guarantees that no events get lost in case your application cannot always process the triggers fast enough. _TIMER0 or _TIMER2: both timers will trigger an event or semaphore periodically. _TIMER0 will use a periodic HPET timer interrupt, while _TIMER2 will reprogram a one-shot HPET timer after each interrupt. The one-shot approach is interesting if the frequency you wish to generate is not an even multiple of the HPET main counter frequency. The sample code uses an algorithm to generate a more correct frequency over a longer period (by reducing rounding errors). _TIMER1 is not used in the sample source code. 
HPT_Init() will locate the HPET I/O memory space, setup the HPET counter (_TIMER0 or _TIMER2) and install the Interrupt Service Thread (IST). Upon timer expiration, the IST will run and on its turn will generate a Windows CE Event or Semaphore. In case of _TIMER2 a new one-shot comparator value is calculated and set for the timer. The IRQ of the HPET timers are programmed to IRQ22, but you can choose typically from 20-23. The TIMERn_INT_ROUT_CAP bits in the TIMn_CONF register will tell you what IRQs you can choose from. HPT_IOControl() can be used to set a new HPET counter frequency (actually you configure the counter timeout value in microseconds), start and stop the timer, and request the current HPET counter value. The latter is interesting because the Windows CE QueryPerformanceCounter() and QueryPerformanceFrequency() APIs implement the same functionality, albeit based on other counter implementations. HpetDrvIst() contains the IST code. DWORD WINAPI HpetDrvIst(LPVOID lpArg) { psHpetDeviceContext pHwContext = (psHpetDeviceContext)lpArg; DWORD mainCount = READDWORD(pHwContext->g_hpet_va, GenCapIDReg + 4); // Main Counter Tick period (fempto sec 10E-15) DWORD i = 0; while (1) { WaitForSingleObject(pHwContext->g_isrEvent, INFINITE); #if defined(_TRIGGER_SEMAPHORE) LONG p = 0; BOOL b = ReleaseSemaphore(pHwContext->g_triggerEvent, 1, &p); #elif defined(_TRIGGER_EVENT) BOOL b = SetEvent(pHwContext->g_triggerEvent); #else #pragma error("Unknown TRIGGER") #endif #if defined(_TIMER0) DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg); DWORD comparator = READDWORD(pHwContext->g_hpet_va, Tim0_ComparatorReg + 0); SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 0); // clear interrupt on HPET level InterruptDone(pHwContext->g_sysIntr); // clear interrupt on OS level _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 0 %06d %08X %08X", pHwContext->g_id, i++, currentCount, comparator)); #elif defined(_TIMER2) DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg); DWORD previousComparator = READDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0); pHwContext->g_counter2.QuadPart += pHwContext->g_comparator.QuadPart; // increment virtual counter (higher accuracy) DWORD comparator = (DWORD)(pHwContext->g_counter2.QuadPart >> 8); // "round" to real value WRITEDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0, comparator); SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 2); // clear interrupt on HPET level InterruptDone(pHwContext->g_sysIntr); // clear interrupt on OS level _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 2 %06d %08X %08X (%08X)", pHwContext->g_id, i++, currentCount, comparator, comparator - previousComparator)); #else #pragma error("Unknown TIMER") #endif } return 1; } The following figure shows how the HPET hardware interrupt via ISR -> IST is translated in a Windows CE Event or Semaphore by the HPET driver. The Event or Semaphore can be used to trigger a Windows CE application. HpetTest.cpp (HpetTest.exe)This cpp file contains sample source how to use the HPET driver from an application. The file is part of a separate (smart device) VS2013 solution. It contains code to measure the generated Event/Semaphore times by means of GetSystemTime() and QueryPerformanceCounter() and QueryPerformanceFrequency() APIs. HPET evaluation If you scan the internet about HPET, you'll find many remarks about buggy HPET implementations and bad performance. Unfortunately that is true. I tested the HPET driver on an Intel ICH7M SBC (release date 2008). 
When an HPET timer expires on the ICH7M, an interrupt indeed is generated, but right after you clear the interrupt, a few more unwanted interrupts (too soon!) occur as well. I tested and debugged it for a loooong time, but I couldn't get it to work. I concluded the ICH7M's HPET is buggy Intel hardware. I tested the HPET driver successfully on a more recent NM10 SBC (release date 2013). With the NM10 chipset, however, I am not fully convinced about the timer's frequency accuracy. In the long run - on average - all is fine, but occasionally I experienced up to 20 microsecond delays (which were immediately compensated on the next interrupt). Of course, this was all measured by software, but I still experienced the occasional delay when both the HPET driver IST thread and the application thread ran at CeSetThreadPriority(1). If it is not the hardware, only the kernel can cause this delay. But Windows CE is an RTOS and I have never experienced such long delays with previous versions of Windows CE. I tested and developed this on WCE8; I am not heavily experienced with it yet. Internet forum threads, however, mention inaccurate HPET timer implementations as well. At this moment I haven't figured out what is going on here. Useful references: http://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/software-developers-hpet-spec-1-0a.pdf http://en.wikipedia.org/wiki/High_Precision_Event_Timer http://wiki.osdev.org/HPET Windows CE BSP source file package for HPET in MyBsp Note that this source code is "As Is". It is still under development and I cannot (and never will) guarantee the correctness of the code. Use it as a guide for your own HPET integration.

    Read the article

  • AZURE - Stairway To Heaven

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2014/08/02/azure---stairway-to-heaven.aspx  Before you start reading, please start playing this song. OK boys and girls, time to get familiar with clouds. Time to become a meteorologist. To be honest I don’t know how to start. Is cloud better or worse than on-campus resources … hmm … it is just different. I think for successful adoption in the cloud world, IT Dinosaurs need to forget some “Private Cloud” virtualization bad habits, and learn a new way of thinking. Take a look: - I don’t need any tapes or CDs (Physical Kingdom of Windows XP and 2000) - I don’t need any locally stored MP3s (CD virtualization :-) - I can just stream music to my computer no matter whether my on-site infrastructure is powered on. Why not do exactly the same with a WebServer, SQL, or a Windows server rented for a while? Let’s go to the other side of the mirror. 1st - register yourself for a free one-month trial; as a happy MSDN subscriber you’ve got a monthly budget to spend. In addition, in the default setting your limit protects you against losing real money if your toys consume too much traffic and space. http://azure.microsoft.com/en-us/pricing/free-trial/ Once your account is ready, forget the WebPortal, we are PowerShell knights. http://go.microsoft.com/?linkid=9811175&clcid=0x409 #Authenticate yourself in Azure Add-AzureAccount #download once your settings file Get-AzurePublishSettingsFile #Import it to your PowerShell Module Import-AzurePublishSettingsFile "C:\Azure\[filename].publishsettings" #validation Get-AzureAccount Get-AzureSubscription #where are Azure datacenters Get-AzureLocation #You will need it Update-Help #storage account is related to physical location, there are two datacenters on each continent, try nearest to you # all your VMs will store VHD files on your storage account #your storage account must be unique globally, so I assume that words account or server are already used New-AzureStorageAccount -StorageAccountName "[YOUR_STORAGE_ACCOUNT]" -Label "AzureTwo" -Location "West Europe" Get-AzureStorageAccount #it looks like you are ready to deploy first VM, what templates we can use Get-AzureVMImage | Select ImageName #what a mess, let’s choose Server 2012 $ImageName = (Get-AzureVMImage)[74].ImageName $cloudSvcName = '[YOUR_STORAGE_ACCOUNT]' $AdminUsername = "[YOUR-ADMIN]" $adminPassword = '[YOUR_PA$$W0RD]' $MediaLocation = "West Europe" $vmnameDC = 'DC01' #burn baby burn !!! $vmDC01 = New-AzureVMConfig -Name $vmnameDC -InstanceSize "Small" -ImageName $ImageName   `     | Add-AzureProvisioningConfig -Windows -Password $adminPassword -AdminUsername $AdminUsername   `     | New-AzureVM -ServiceName $cloudSvcName #ice, ice baby … Get-AzureVM Get-AzureRemoteDesktopFile -ServiceName "[YOUR_STORAGE_ACCOUNT]" -Name "DC01" -LocalPath "c:\AZURE\DC01.rdp" As you can see it is not just a new-VM; you need to associate your VM with AzureVMConfig (it sets your template), AzureProvisioningConfig (it sets your customizations), and a storage account. In future releases you’ll need to put this machine in a specific subnet, attach a HDD and many more. After a second reading I found that I am using the same name for the STORAGE and SERVICE account; please be aware of it if you need to split these values. Conclusions: - pipe rules !
- at the beginning it is hard to change your mind and accept the fact that it is easier to remove and recreate a VM than to move it to a different subnet - by default everything is firewalled, with limited access to DNS, but NATed outside on custom ports. It is good to check these translations sometimes on the webportal. - if you remove your VMs, your hard drives remain in storage and MS will charge you; use Remove-AzureVM -DeleteVHD For me AZURE is a lot of fun; once again I can be a newbie and learn every page. For me Azure offers real freedom in deployment of VMs without arguing with NetAdmins, WinAdmins, DBAs, PMs and other Change Managers. Unfortunately, sooner or later they will come to my haven and change it into …

    Read the article

  • APress Deal of the Day 2/August/2014 - Pro ASP.NET MVC 5

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/02/apress-deal-of-the-day-2august2014---pro-asp.net-mvc.aspxToday’s $10 Deal of the Day from APress at http://www.apress.com/9781430265290 is Pro ASP.NET MVC 5. “The ASP.NET MVC 5 Framework is the latest evolution of Microsoft’s ASP.NET web platform. It provides a high-productivity programming model that promotes cleaner code architecture, test-driven development, and powerful extensibility, combined with all the benefits of ASP.NET.”

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
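
    A storage-driver-agnostic way to get read-only access is to go through the Docker CLI rather than the driver's on-disk layout; a sketch (the container name and paths are placeholders, and it assumes a reasonably current Docker client):

        # dump the container's whole filesystem as a tar stream and browse it
        docker export my_container | tar -tvf - | less
        # or unpack it into a scratch directory on the host
        mkdir /tmp/my_container_fs
        docker export my_container | tar -xf - -C /tmp/my_container_fs
        # or copy a single path out of the container
        docker cp my_container:/etc/hostname /tmp/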

    Read the article

  • Cannot login to zabbix web portal

    - by hlx98007
    I've managed to install Zabbix22-server on CentOS 6.x along with php-fpm and nginx. I can view the page at 127.0.0.1 but I can only see this: After clicking the "Login" button, the page is the same: What can I do to make it work as expected, so that I can log in as admin? Here are some confs: nginx_zabbix.conf: server { listen 80; add_header X-Frame-Options "SAMEORIGIN"; access_log /var/log/nginx/zabbix.log; error_log /var/log/nginx/zabbix.err.log; client_max_body_size 500M; # This folder is a soft link to /usr/share/zabbix # the permission has been set to nginx:nginx recursively. root /var/www/zabbix; location / { index index.html index.htm index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; fastcgi_param PATH_INFO $path_info; } } php-fpm is using its default values, with the permission user/group set to nginx (rather than apache). The folder /var/lib/php/session has been set to nginx:nginx with permission 770. SELinux is set to disabled. I've restarted everything up to this point.
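
    One detail worth noting in the config above: fastcgi_param PATH_INFO $path_info references a variable that is never defined; nginx normally derives it with fastcgi_split_path_info, which populates $fastcgi_path_info. A hedged sketch of that location block (an assumption on my part, not a confirmed fix for the login problem):

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }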

    Read the article

  • My smtp server is spammed?

    - by Milos
    I have a server and the postfix client on it. Since several days, I noticed a lot of processes running there. When checked, there are a lot of emails sent. Here is an example from the mail log: Aug 18 11:54:56 mem postfix/smtpd[9963]: connect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167] Aug 18 11:54:56 mem postfix/smtpd[9301]: connect from unknown[186.113.45.4] Aug 18 11:54:56 mem postfix/smtpd[9963]: 525E7114012D: client=dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167] Aug 18 11:54:56 mem postfix/cleanup[9970]: 525E7114012D: message-id=<B55835C9027BFA9D16CCBB556DB2F48BB82DF004000480BA-db0c3ce8aa74446411898d0d2feb3001@email.filmforthoughtinc.com> Aug 18 11:54:56 mem postfix/qmgr[2581]: 525E7114012D: from=<[email protected]>, size=10702, nrcpt=1 (queue active) Aug 18 11:54:56 mem postfix/smtpd[9301]: EC52711401DC: client=unknown[186.113.45.4] Aug 18 11:54:57 mem postfix/smtpd[9963]: disconnect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167] Aug 18 11:54:57 mem postfix/cleanup[8597]: EC52711401DC: message-id=<4C905D97606B436FE50C6F738DE014D9D84F2185BA815D81-1a4dbe6fc2bfcc8183f5faf901cfa15e@email.manguerasespecializadas.com> Aug 18 11:54:57 mem postfix/smtp[9971]: 525E7114012D: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.55/0/0.45/0.16, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command)) Aug 18 11:54:57 mem postfix/cleanup[10067]: 8B1E11140268: message-id=<[email protected]> Aug 18 11:54:57 mem postfix/bounce[10001]: 525E7114012D: sender non-delivery notification: 8B1E11140268 Aug 18 11:54:57 mem postfix/qmgr[2581]: 8B1E11140268: from=<>, size=12693, nrcpt=1 (queue active) Aug 18 11:54:57 mem postfix/qmgr[2581]: 525E7114012D: removed Aug 18 11:54:57 mem postfix/qmgr[2581]: EC52711401DC: from=<[email protected]>, size=10978, nrcpt=1 (queue active) Aug 18 11:54:57 mem postfix/smtp[10013]: connect to aspmx.l.google.com[2607:f8b0:400d:c03::1b]:25: Network is unreachable Aug 18 11:54:57 mem postfix/smtpd[9301]: disconnect from unknown[186.113.45.4] Aug 18 11:54:58 mem postfix/smtp[10013]: 8B1E11140268: to=<[email protected]>, relay=aspmx.l.google.com[74.125.22.26]:25, delay=0.5, delays=0.06/0/0.28/0.16, dsn=5.1.1, status=bounced (host aspmx.l.google.com[74.125.22.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. 
Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 l7si24621420qad.26 - gsmtp (in reply to RCPT TO command)) Aug 18 11:54:58 mem postfix/qmgr[2581]: 8B1E11140268: removed Aug 18 11:54:58 mem postfix/smtp[9971]: EC52711401DC: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.66/0/0.44/0.12, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command)) Aug 18 11:54:58 mem postfix/cleanup[9970]: 414361140254: message-id=<[email protected]> Aug 18 11:54:58 mem postfix/bounce[10001]: EC52711401DC: sender non-delivery notification: 414361140254 Aug 18 11:54:58 mem postfix/qmgr[2581]: 414361140254: from=<>, size=13057, nrcpt=1 (queue active) Aug 18 11:54:58 mem postfix/qmgr[2581]: EC52711401DC: removed Aug 18 11:55:01 mem postfix/smtp[10002]: 414361140254: to=<[email protected]>, relay=manguerasespecializadas.com[99.198.96.210]:25, delay=2.9, delays=0.04/0/2.1/0.84, dsn=2.0.0, status=sent (250 OK id=1XJPGs-0007BE-OI) Aug 18 11:55:01 mem postfix/qmgr[2581]: 414361140254: removed Is my server being attacked or used to send spam? How can I check that? Thank you.
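
    A few generic Postfix checks usually narrow down whether mail is being injected from outside (open relay) or through an authenticated/compromised account; these are standard commands, not specific to this server, and the log path may be /var/log/mail.log on Debian-like systems:

        postqueue -p | tail -n 1                                   # how much mail is sitting in the queue
        postconf -n | egrep 'mynetworks|relay|smtpd_recipient_restrictions|sasl'
        grep -o 'sasl_username=[^ ]*' /var/log/maillog | sort | uniq -c | sort -rn        # who authenticated to send
        grep 'connect from' /var/log/maillog | awk '{print $8}' | sort | uniq -c | sort -rn | head   # top connecting clients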

    Read the article

  • tmux: Suddenly, cannot horizontally split

    - by A__A__0
    As root, using a reasonably default .profile and .shrc and an empty tmux.conf, I am unable to split the window horizontally. There are a number of cases to consider so I'll list them clearly. Using the keybinding + empty configuration: nothing happens Using the keybinding + my configuration: a bell is generated, nothing else; occasionally, the split will appear and disappear immediately (maybe it always does this, but I'm connecting over ssh so it may not make it through) Using tmux split-window -h with any config: tmux immediately exits I've posted here in order the server and client verbose logs generated by tmux -v during the third case: server started, pid 9523 socket path /tmp/tmux-0/default new client 7 got 100 from client 7 got 101 from client 7 got 102 from client 7 got 103 from client 7 got 104 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 106 from client 7 got 200 from client 7 cmdq 0x801c6e080: new-session (client 7) new term: xterm xterm override: XT xterm override: Ms ]52;%p1%s;%p2%s xterm override: Cs ]12;%p1%s xterm override: Cr ]112 xterm override: Ss [%p1%d q xterm override: Se [2 q new key Oo: 0x1021 (KP/) new key Oj: 0x1022 (KP*) new key Om: 0x1023 (KP-) new key Ow: 0x1024 (KP7) new key Ox: 0x1025 (KP8) new key Oy: 0x1026 (KP9) new key Ok: 0x1027 (KP+) new key Ot: 0x1028 (KP4) new key Ou: 0x1029 (KP5) new key Ov: 0x102a (KP6) new key Oq: 0x102b (KP1) new key Or: 0x102c (KP2) new key Os: 0x102d (KP3) new key OM: 0x102e (KPEnter) new key Op: 0x102f (KP0) new key On: 0x1030 (KP.) 
new key OA: 0x101d (Up) new key OB: 0x101e (Down) new key OC: 0x1020 (Right) new key OD: 0x101f (Left) new key [A: 0x101d (Up) new key [B: 0x101e (Down) new key [C: 0x1020 (Right) new key [D: 0x101f (Left) new key OH: 0x1018 (Home) new key OF: 0x1019 (End) new key [H: 0x1018 (Home) new key [F: 0x1019 (End) new key Oa: 0x501d (C-Up) new key Ob: 0x501e (C-Down) new key Oc: 0x5020 (C-Right) new key Od: 0x501f (C-Left) new key [a: 0x901d (S-Up) new key [b: 0x901e (S-Down) new key [c: 0x9020 (S-Right) new key [d: 0x901f (S-Left) new key [11^: 0x5002 (C-F1) new key [12^: 0x5003 (C-F2) new key [13^: 0x5004 (C-F3) new key [14^: 0x5005 (C-F4) new key [15^: 0x5006 (C-F5) new key [17^: 0x5007 (C-F6) new key [18^: 0x5008 (C-F7) new key [19^: 0x5009 (C-F8) new key [20^: 0x500a (C-F9) new key [21^: 0x500b (C-F10) new key [23^: 0x500c (C-F11) new key [24^: 0x500d (C-F12) new key [25^: 0x500e (C-F13) new key [26^: 0x500f (C-F14) new key [28^: 0x5010 (C-F15) new key [29^: 0x5011 (C-F16) new key [31^: 0x5012 (C-F17) new key [32^: 0x5013 (C-F18) new key [33^: 0x5014 (C-F19) new key [34^: 0x5015 (C-F20) new key [2^: 0x5016 (C-IC) new key [3^: 0x5017 (C-DC) new key [7^: 0x5018 (C-Home) new key [8^: 0x5019 (C-End) new key [6^: 0x501a (C-NPage) new key [5^: 0x501b (C-PPage) new key [11$: 0x9002 (S-F1) new key [12$: 0x9003 (S-F2) new key [13$: 0x9004 (S-F3) new key [14$: 0x9005 (S-F4) new key [15$: 0x9006 (S-F5) new key [17$: 0x9007 (S-F6) new key [18$: 0x9008 (S-F7) new key [19$: 0x9009 (S-F8) new key [20$: 0x900a (S-F9) new key [21$: 0x900b (S-F10) new key [23$: 0x900c (S-F11) new key [24$: 0x900d (S-F12) new key [25$: 0x900e (S-F13) new key [26$: 0x900f (S-F14) new key [28$: 0x9010 (S-F15) new key [29$: 0x9011 (S-F16) new key [31$: 0x9012 (S-F17) new key [32$: 0x9013 (S-F18) new key [33$: 0x9014 (S-F19) new key [34$: 0x9015 (S-F20) new key [2$: 0x9016 (S-IC) new key [3$: 0x9017 (S-DC) new key [7$: 0x9018 (S-Home) new key [8$: 0x9019 (S-End) new key [6$: 0x901a (S-NPage) new key [5$: 0x901b (S-PPage) new key [11@: 0xd002 (C-S-F1) new key [12@: 0xd003 (C-S-F2) new key [13@: 0xd004 (C-S-F3) new key [14@: 0xd005 (C-S-F4) new key [15@: 0xd006 (C-S-F5) new key [17@: 0xd007 (C-S-F6) new key [18@: 0xd008 (C-S-F7) new key [19@: 0xd009 (C-S-F8) new key [20@: 0xd00a (C-S-F9) new key [21@: 0xd00b (C-S-F10) new key [23@: 0xd00c (C-S-F11) new key [24@: 0xd00d (C-S-F12) new key [25@: 0xd00e (C-S-F13) new key [26@: 0xd00f (C-S-F14) new key [28@: 0xd010 (C-S-F15) new key [29@: 0xd011 (C-S-F16) new key [31@: 0xd012 (C-S-F17) new key [32@: 0xd013 (C-S-F18) new key [33@: 0xd014 (C-S-F19) new key [34@: 0xd015 (C-S-F20) new key [2@: 0xd016 (C-S-IC) new key [3@: 0xd017 (C-S-DC) new key [7@: 0xd018 (C-S-Home) new key [8@: 0xd019 (C-S-End) new key [6@: 0xd01a (C-S-NPage) new key [5@: 0xd01b (C-S-PPage) new key [I: 0x1031 ((null)) new key [O: 0x1032 ((null)) new key OP: 0x1002 (F1) new key OQ: 0x1003 (F2) new key OR: 0x1004 (F3) new key OS: 0x1005 (F4) new key [15~: 0x1006 (F5) new key [17~: 0x1007 (F6) new key [18~: 0x1008 (F7) new key [19~: 0x1009 (F8) new key [20~: 0x100a (F9) new key [21~: 0x100b (F10) new key [23~: 0x100c (F11) new key [24~: 0x100d (F12) new key [2~: 0x1016 (IC) new key [3~: 0x1017 (DC) replacing key OH: 0x1018 (Home) replacing key OF: 0x1019 (End) new key [6~: 0x101a (NPage) new key [5~: 0x101b (PPage) new key [Z: 0x101c (BTab) replacing key OA: 0x101d (Up) replacing key OB: 0x101e (Down) replacing key OD: 0x101f (Left) replacing key OC: 0x1020 (Right) spawn: /bin/sh -- session 0 created writing 207 to client 7 
got 208 from client 7 input_parse: '#' ground input_parse: ' ' ground keys are 7 ([?1;2c) received service class 1 complete key [?1;2c 0xfff keys are 1 (t) complete key t 0x74 input_parse: 't' ground keys are 1 (m) complete key m 0x6d input_parse: 'm' ground keys are 1 (u) complete key u 0x75 input_parse: 'u' ground keys are 1 (x) complete key x 0x78 input_parse: 'x' ground keys are 1 ( ) complete key 0x20 input_parse: ' ' ground keys are 1 (s) complete key s 0x73 input_parse: 's' ground keys are 1 (p) complete key p 0x70 input_parse: 'p' ground keys are 1 (l) complete key l 0x6c input_parse: 'l' ground keys are 1 (i) complete key i 0x69 input_parse: 'i' ground keys are 1 (t) complete key t 0x74 input_parse: 't' ground keys are 1 (-) complete key - 0x2d input_parse: '-' ground keys are 1 (d) complete key d 0x64 input_parse: 'd' ground keys are 1 () complete key 0x7f input_parse: '' ground input_c0_dispatch: ' input_parse: '' ground input_parse: '[' esc_enter input_parse: 'K' csi_enter input_csi_dispatch: 'K' "" "" keys are 1 (w) complete key w 0x77 input_parse: 'w' ground keys are 1 (i) complete key i 0x69 input_parse: 'i' ground keys are 1 (n) complete key n 0x6e input_parse: 'n' ground keys are 1 (d) complete key d 0x64 input_parse: 'd' ground keys are 1 (o) complete key o 0x6f input_parse: 'o' ground keys are 1 (w) complete key w 0x77 input_parse: 'w' ground keys are 1 ( ) complete key 0x20 input_parse: ' ' ground keys are 1 (-) complete key - 0x2d input_parse: '-' ground keys are 1 (h) complete key h 0x68 input_parse: 'h' ground keys are 1 ( ) complete key 0xd input_parse: ' ' ground input_c0_dispatch: ' input_parse: ' ' ground input_c0_dispatch: ' new client 13 got 100 from client 13 got 101 from client 13 got 102 from client 13 got 103 from client 13 got 104 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 105 from client 13 got 106 from client 13 got 200 from client 13 cmdq 0x801c6e160: split-window -h (client 13) spawn: /bin/sh -- writing 203 to client 13 input_parse: '#' ground input_parse: ' ' ground input_parse: '#' ground input_parse: ' ' ground lost client 13 session 0 destroyed writing 203 to client 7 got 205 from client 7 writing 204 to client 7 lost client 7 got 207 from server got 203 from server got 204 from server There are some other peculiarities: With a newly created user (from which I overwrote root's .profile and .shrc, tmux works perfectly. Occasionally (twice out of the 50 or so times I've tested it), the splitting will work fine once in a session. (This happened for example when I ran ktrace on tmux, which I can also post) To explain the 'suddenly' part of the title: when I started my newly updated mysql56-server, tmux immediately exited and lost the session. Recently I changed architectures, from FreeBSD 10.0 i386 to amd64, and I am still working through shared library incompatibilities. I suspect that this could be involved, but I can't imagine how an incompatibility of this sort could result in such a specific, isolated failure.
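
    Given the suspicion about leftovers from the i386-to-amd64 switch, a few quick checks are cheap (standard FreeBSD tools; adjust the binary path if tmux came from somewhere other than ports/pkg):

        file /usr/local/bin/tmux      # should report a 64-bit x86-64 ELF, not 80386
        ldd /usr/local/bin/tmux       # every library (libevent in particular) should resolve
        pkg check -d -a               # look for packages with missing or broken dependencies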

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon Cloudfront distribution that was originally set as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through the cloudfront). What happened: At some point, the URL signing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the setting of the cloudfront distribution and of the S3 bucket it is pointing to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, the invalidation did not seem to succeed. Requests to the same cloudfront URL (with or without the query string) still return 403. The response header looks like: HTTP/1.1 403 Forbidden Server: CloudFront Date: Mon, 18 Aug 2014 15:16:08 GMT Content-Type: text/xml Content-Length: 110 Connection: keep-alive X-Cache: Error from cloudfront Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront) X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B== Things I tried: I tried to set up another cloudfront distribution that points to the same S3 as origin server. Requests to the same object in the new distribution were successful. The question: Did anyone encounter the same situation, where a cloudfront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!

    Read the article

  • Apache not directing to correct VHost

    - by BANANENMANNFRAU
    I have set up the following virtual host: <VirtualHost *:80> ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com DocumentRoot /var/www/homepage/public_html ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> When I hit my URL, Apache still shows the default page, not the index I've created in the given DocumentRoot. For my domain I have set the A record to the IP of my VPS. apache2ctl -S output: VirtualHost configuration: *:80 is a NameVirtualHost default server xxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1) port 80 namevhost xxxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1) port 80 namevhost mysite.com (/etc/apache2/sites-enabled/homepage.conf:1) alias www.mysite.com ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www" Main ErrorLog: "/var/log/apache2/error.log" Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults Mutex watchdog-callback: using_defaults PidFile: "/var/run/apache2/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG User: name="www-data" id=33 not_used Group: name="www-data" id=33 not_used How would I need to set up my virtual host so that Apache shows the correct site depending on the domain I'm redirecting from?
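
    In the apache2ctl -S output above, the 000-default vhost is listed first and acts as the default server, so the new site is only served when name-based matching succeeds. A quick way to test that (a sketch assuming the stock Debian/Ubuntu layout shown in the question):

        sudo a2dissite 000-default            # temporarily take the default vhost out of the picture
        sudo a2ensite homepage
        sudo apache2ctl configtest && sudo service apache2 reload
        curl -H 'Host: mysite.com' http://127.0.0.1/    # confirm which DocumentRoot answers for the name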

    Read the article

  • MySQL permission errors

    - by dotancohen
    It seems that on a Ubuntu 14.04 machine the user mysql cannot access anything. It is not writing logs nor reading files. Witness: - bruno():mysql$ cat /etc/passwd | grep mysql mysql:x:116:127:MySQL Server,,,:/nonexistent:/bin/false - bruno():mysql$ sudo mysql_install_db Installing MySQL system tables... 140818 18:16:50 [ERROR] Can't read from messagefile '/usr/share/mysql/english/errmsg.sys' 140818 18:16:50 [ERROR] Aborting 140818 18:16:50 [Note] Installation of system tables failed! Examine the logs in /var/lib/mysql for more information. ...boilerplate trimmed... - bruno():mysql$ ls -la /usr/share/mysql/english/errmsg.sys -rw-r--r-- 1 root root 59535 Jul 29 13:40 /usr/share/mysql/english/errmsg.sys - bruno():mysql$ wc -l /usr/share/mysql/english/errmsg.sys 16 /usr/share/mysql/english/errmsg.sys Here we have seen that mysql cannot read /usr/share/mysql/english/errmsg.sys even though the permissions are open to read it, and in fact the regular login user can read the file (with wc). Additionally, MySQL is not writing any logs: - bruno():mysql$ ls -la /var/log/mysql total 8 drwxr-s--- 2 mysql adm 4096 Aug 18 16:10 . drwxrwxr-x 18 root syslog 4096 Aug 18 16:10 .. What might cause this user to not be able to access anything? What can I do about it?
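
    A few generic checks that often narrow this kind of thing down on Ubuntu (diagnostics only, not a confirmed cause): whether the mysql user can actually read the file, whether AppArmor is confining mysqld, and what the permissions look like along the whole path:

        sudo -u mysql head -c 16 /usr/share/mysql/english/errmsg.sys   # read test as the mysql user
        namei -l /usr/share/mysql/english/errmsg.sys                   # permissions of every path component
        sudo aa-status | grep -i mysql                                  # is an AppArmor profile active for mysqld?
        mount | grep ' /usr '                                           # unusual mount options on /usr, if it is a separate filesystem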

    Read the article

  • DNS record reappears after having been deleted

    - by palmbardier
    I've got a Microsoft Windows Server 2003 R2 acting as a domain controller for a small network. It provides DHCP and DNS among a few other services. It's only got a single NIC, but it's configured with two IP addresses. I want its name to resolve to one of the two IP addresses assigned to its NIC. I've unchecked "Register this connection's addresses in DNS" under the "Advanced TCP/IP Settings". Currently we've got two distinct DNS Host (A) records for this domain controller: dc-001 - 10.0.0.1 dc-001 - 10.0.100.1 I've deleted the first entry but it continues to reappear in my dnsmgmt snap-in. Unfortunately I'm not a Microsoft systems administrator by trade. Does anyone familiar with Microsoft server environments know why a deleted Host (A) record would reappear? Is there another check-box I need to toggle? Thanks in advance to all of you Microsoft experts out there.

    Read the article

  • What's causing "shutdown state" after TFTP reloaded Cisco `running-config` on 871?

    - by xtian
    Cisco CCP Write Configuration borked my 871w config while I was trying to set up port forwarding. I tested the 871's flash memory with fsck and rewrote the minimal config for TFTP (which is the same for Cisco's CCP app.). Then, I successfully uploaded a previously working running-config from Win Vista using SolarWinds TFTP Server; unfortunately, the restore was not entirely successful. The old running config was saved to the 871's startup-config and I can log in using the console port. Some other things that are working are the hostname and welcome message, but that's about it. Startup shows an error SETUP: new interface NVI0 placed in "shutdown" state after TFTP. The missing Ethernet-link light on the access point modem shows the 871's outside FE4 is not working. So... what's the possible problem with reloading a previously working config (approximately 4 months with the same config) via TFTP? Is there something I can look for on the 871 to verify the config?
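
    One cheap thing to verify is whether the WAN interface simply came back administratively down after the restore; on an 871 the outside port is FastEthernet4, and checking and re-enabling it from the console looks like this (plain IOS commands, nothing specific to this particular config):

        Router# show ip interface brief
        Router# show running-config interface FastEthernet4
        Router# configure terminal
        Router(config)# interface FastEthernet4
        Router(config-if)# no shutdown
        Router(config-if)# end
        Router# copy running-config startup-config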

    Read the article

  • ESXi onboard SATA passthrough [on hold]

    - by lodge93
    I am running ESXi 5.5 on a home system. When I attempt to pass one of the onboard SATA controllers (Intel ICH10R) to a VM, it is successful. However, the controller never appears in the VM. What else am I missing that would cause a VM to not recognize the controller? Additional info: the controller is set to AHCI mode in the BIOS. Prior to setting up the controller for passthrough, ESXi did recognize the controller and the drives. I have tried with multiple VMs, none of which recognized the controller or the attached drives. I have two onboard SATA controllers, one controller for ESXi and one for passthrough. The final ideal state is that I could pass the controller through to a FreeNAS VM.

    Read the article

  • Fetch new Mails (Also from Subfolders) from another IMAP server as new Mail in Postfix

    - by Tobi
    Hi everyone. I have installed Postfix on a server, with aliases and domains coming from a MySQL database. It is configured to forward some addresses to other mail accounts and also delivers some mail into local mailboxes that are queried over a Dovecot IMAP server. For this example let there be two users: [email protected], which is a user whose mail just gets forwarded to, let's say, [email protected]; and [email protected], which is a user that accesses its mail over local IMAP. Now, I want to fetch some mail from another mailserver and handle it as if it were sent to a user of my mailserver. Let's say these correlations exist: [email protected] has two external accounts, [email protected] and [email protected]; [email protected] also has one external account, [email protected]. The problem is that the new mail on that other mailserver is not always in the inbox; it might be in subfolders such as mailinglists/all or mailinglists/it, but also in mailinglists/some-other-department, which is not interesting and should not be delivered. I already found a program called fetchmail, but I cannot find out how to fetch subfolders or decide which subfolders are fetched.
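
    fetchmail can poll specific IMAP folders with the folder keyword, one poll block per remote account. A hedged .fetchmailrc sketch (the hostname, credentials, local user and folder names below are placeholders; the folder names must match exactly what the remote server reports, including its folder separator):

        poll imap.example.org protocol IMAP
          user "list-user" password "secret"
          is "localuser" here
          folder "INBOX" "mailinglists/all" "mailinglists/it"
          ssl keep
          smtphost localhost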

    Read the article

  • What happens when I delete the first of several AWS EBS snapshots?

    - by Jepper
    On http://aws.amazon.com/ebs/pricing/ it says: "EBS Snapshots [...] For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved." I intend to snapshot some of my instances daily and keep snapshots for 7 days after which the snapshots are destroyed. What happens when, eventually, I destroy the first snapshot? Will the subsequent snapshots be worthless given the first is no longer available?

    Read the article

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using cronolog to date-stamp the Apache log file names. They are in the following format: /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log I would like to have a script that runs on the first of every month, compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file. find . -name '*.log' -mtime +1 -type f I've found several examples like the one above that allow you to select files x days old, but I need all files from the previous month. I am the first to admit my bash scripting skills are weak, so I would really appreciate any help and guidance.
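
    A minimal sketch of such a script (assumptions: GNU date, key-based scp to the backup host, and the exact filename pattern above; the host and destination path are placeholders), to be run from cron on the 1st of each month:

        #!/bin/bash
        # previous month as YYYY-MM, e.g. 2014-07
        PREV=$(date -d "$(date +%Y-%m-01) -1 month" +%Y-%m)
        cd /WEB/LOGS || exit 1
        ARCHIVE="apache-logs-$PREV.tar.gz"
        tar czf "$ARCHIVE" APACHE_ACCESS_"$PREV"-*.log APACHE_ERROR_"$PREV"-*.log \
          && scp "$ARCHIVE" backupuser@loghost:/backups/ \
          && rm -f "$ARCHIVE"          # only the compressed file is removed, as described above
        # example crontab entry:  0 2 1 * * /usr/local/bin/archive-apache-logs.sh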

    Read the article

  • How can I control WiFi clients that authenticate with a RADIUS server (FreeRADIUS) and Active Directory [on hold]

    - by Debian
    In order to authenticate WiFi clients, I have a FreeRADIUS server that works with Active Directory. My question is this: right now, every user in Active Directory can connect to the WLAN. How can I control who is allowed on with the FreeRADIUS server? At the moment anyone can connect to the network and I cannot restrict them; honestly, I don't know how to control them. FreeRADIUS is installed on CentOS 6.5 and I don't have MySQL. Thanks,
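
    If the Active Directory integration is the usual Samba/winbind + ntlm_auth arrangement, access is commonly restricted by making ntlm_auth require membership of a specific AD group; a hedged sketch (the domain and group names are placeholders, and it assumes the mschap module already calls ntlm_auth):

        # quick test from the shell first (winbind must already be joined to the domain):
        ntlm_auth --request-nt-key --domain=EXAMPLE --username=someuser \
                  --require-membership-of='EXAMPLE\Wireless-Users'
        # then append the same option to the existing ntlm_auth line in the
        # FreeRADIUS mschap module configuration (e.g. /etc/raddb/modules/mschap):
        #   --require-membership-of=EXAMPLE\\Wireless-Users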

    Read the article
