Search Results

Search found 1818 results on 73 pages for 'steve ellis'.

Page 5/73 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Kill a 10-minute-old zombie process in a Linux bash script

    - by Steve
    I've been tinkering with a regex answer by yukondude with little success. I'm trying to kill processes that are older than 10 minutes. I already know what the process IDs are. I'm looping over an array every 10 min to see if any lingering procs are around and need to be killed. Anybody have any quick thoughts on this?

    ps -eo uid,pid,etime 3233332 | egrep ' ([0-9]+-)?([0-9]{2}:?){3}' | awk '{print $2}' | xargs -I{} kill {}

    Thanks, Steve
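    A minimal bash sketch of the same 10-minute check, assuming the tracked PIDs are already in an array named pids and that your ps supports the etimes (elapsed seconds) column; on older systems you would still have to parse etime the way the pipeline above does:

        for pid in "${pids[@]}"; do
            # elapsed time of this PID in seconds; empty if the process is already gone
            age=$(ps -o etimes= -p "$pid" 2>/dev/null)
            if [ -n "$age" ] && [ "$age" -gt 600 ]; then
                kill "$pid"   # older than 10 minutes, send SIGTERM
            fi
        done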

    Read the article

  • Searches (and general querying) with HBase and/or Cassandra (best practices?)

    - by alexeypro
    I have a User model object with quite a few fields (properties, if you wish) in it. Say "firstname", "lastname", "city" and "year-of-birth". Each user also gets a "unique id". I want to be able to search by them. How do I do that properly? How to do that at all?

    My understanding (which will work for pretty much any key-value storage -- first goes the key, then the value): u:123456789 = serialized_json_object ("u" is a simple prefix for user keys, 123456789 is the "unique id"). Now, since I want to be able to search by firstname and lastname, I can save: f:Steve = u:384734807,u:2398248764,u:23276263 and f:Alex = u:12324355,u:121324334. So the key is "f" - the prefix for firstnames - and "Steve" is the actual firstname. For "f:Steve" we save as the value all the user ids of users named Steve. That makes every search very, very easy. Querying by a few fields (properties) -- say by firstname (i.e. "f:Steve") and lastname (i.e. "l:Anything") -- is still easy: first get the list of user ids from "f:Steve", then the list from "l:Anything", find the intersecting user ids, and here you go.

    Problems (and there are quite a few):

    1. Saving, updating and deleting a user is a pain. It has to be an atomic and consistent operation. Also, if the size of a value is limited, then we are in (potential) trouble. I really don't have an answer here. Only zipping the list of user ids? Not too cool, though.

    2. What if we want to add a new field to search by, eventually? Say by "city". We certainly can do it the same way ("c:Los Angeles" = ..., "c:Chicago" = ...), but if we didn't foresee all those "search choices" from the very beginning, then we will have to create some nightly job or something to go over all existing User records and update those "c:CITY" keys for them... quite a big job!

    3. Problems with locking. User "u:123" updates his name to "Alex", and user "u:456" updates his name to "Alex". They both have to update "f:Alex" with their ids. That means either we get into an overwriting problem, or one update has to wait for the other (and imagine if there are many of them?!).

    What's the best way of doing this, keeping in mind that I want to search by many fields?

    P.S. Please, the question is about HBase/Cassandra/NoSQL/key-value storages. Please, please - no advice to use MySQL and "read about" SELECTs, and to worry about scaling problems "later". There is a reason why I asked MY question exactly the way I did. :-)

    Read the article

  • IUSR account and SCCM 2007 R3 agent

    - by steve schofield
    I recently started working with SCCM and rolling the agent out to machines with IIS 7.x installed. I ran into issues where the SCCM agent wouldn't install. The errors were mostly 0x80004005 and 1603; another key one I found was Return Value 3 in the SCCM setup log. During the troubleshooting, I found a cool utility called WMI Diag. WMI Diag is a VBS script that reads the local WMI store and helps diagnose issues. Anyone working with SMS or SCCM should keep this handy tool around. The good thing was that in my particular case WMI was healthy.

    The issue turned out to be that I had changed the Anonymous Authentication module from using the IUSR account to inheriting the application pool identity. Once we temporarily switched back to IUSR, installed the agent, then switched the setting back to inherit the application pool identity, the SCCM agent installed with no issues. I'm not sure why switching back to the IUSR account solved my issue; if I find out, I'll update the post.

    More information on IIS 7 built-in accounts: http://learn.iis.net/page.aspx/140/understanding-built-in-user-and-group-accounts-in-iis-7
    Specify an application pool identity: http://technet.microsoft.com/en-us/library/cc771170(WS.10).aspx
    SCCM resources (Config Mgr Setup / Deployment forums): http://social.technet.microsoft.com/Forums/en-US/configmgrsetup/threads
    http://www.myitforum.com (the best independent SCCM community resource)

    Hope this helps.
    Steve Schofield
    Microsoft MVP - IIS
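    For reference, a sketch of making the same switch from an elevated command prompt with appcmd; "Default Web Site" is an example site name, and the change can equally be made in IIS Manager under Authentication:

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:system.webServer/security/authentication/anonymousAuthentication /userName:"IUSR" /commit:apphost
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:system.webServer/security/authentication/anonymousAuthentication /userName:"" /commit:apphost

    The first command points anonymous authentication back at IUSR; the second (empty userName) returns it to the application pool identity once the agent is installed.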

    Read the article

  • Webfarm and IIS configuration tips/tricks

    - by steve schofield
    I was recently talking with some good friends about tips for performance and what an IIS administrator can do on the server side. I also see this question from time to time in the forums @ http://forums.iis.net. Of course, you should test individual settings in a controlled environment, while performing load testing, before just implementing them on your production farm.

    - IIS compression enabled (both static and dynamic if possible; set the level to 9). If you are running IIS 6, check out this article by Scott Forsyth. (See the appcmd example below.)
    - Run FRT (Failed Request Tracing) for long-running pages.
    - SQL connection pooling in code.
    - Look at using PAL with performance counters (http://blogs.iis.net/ganekar/archive/2009/08/12/pal-performance-analyzer-with-iis.aspx).
    - Look at load testing using the Visual Studio load testing tools.
    - Use Log Parser to find long-running pages. Here are a couple of examples.
    - Look at CPU, memory and disk counters. Make sure the server has enough resources.
    - The same machineKey across all nodes in the farm.
    - Localized content vs. UNC-based content on a single server (my UNC tag has related posts).
    - Content expiration.
    - ETags the same across all web-farm nodes.
    - Disable the Scalable Networking Pack.
    - Use YSlow or the developer tools in Chrome to help measure client-side improvements.

    Additionally, some basic counters for measuring applications are Concurrent Connections, Requests/Sec, and Requests Queued. I would also recommend checking out Chapter 17 in the IIS 7 Resource Kit; it was one of the chapters I authored. :) I strongly suggest testing one change at a time to see how it helps improve your performance. Hopefully this post provides a few options to review in your environment.

    Cheers,
    Steve Schofield
    Microsoft MVP - IIS
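    For the compression item above, a sketch of enabling static and dynamic compression server-wide with appcmd; the compression level itself lives in the httpCompression section and is worth tuning separately:

        %windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/urlCompression /doStaticCompression:"True" /commit:apphost
        %windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/urlCompression /doDynamicCompression:"True" /commit:apphost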

    Read the article

  • Role of Microsoft certifications ADO.Net, ASP.Net, WPF, WCF and Career?

    - by Steve Johnson
    I am a Microsoft fan and .NET enthusiast. I want to align my career with current and future .NET technologies. I have an MCTS in ASP.NET 3.5. The question is about the continuation of certifications and my career growth, and maybe a different job! I want to keep pace with future Microsoft .NET technologies; my current job, however, doesn't allow me to do so, so I decided to pursue .NET-based certifications to stay abreast of the latest .NET technologies. My questions:

    1. What certifications should I pursue next? I have MCTS .NET 3.5 WPF (Exam 70-502) and MCTS .NET 3.5 WCF (Exam 70-504) in mind, so that I can go into Silverlight development and seek jobs related to Silverlight development.
    2. What other steps do I need to take in order to develop professional expertise in technologies such as WPF, WCF and Silverlight when my current employer is reluctant to shift to the latest .NET technologies?

    I am sure that there are a lot of people around here who are working with .NET technologies and have industry experience. Being a newcomer at the start of my career, I need to make the right decision, so I am seeking help from this community in guiding me to the right path. Expert replies are much appreciated, and thanks in advance. Best Regards, Steve.

    Read the article

  • Can I trust the Basic schedule equation?

    - by Steve Campbell
    I've been reading Steve McConnell's Software Estimation: Demystifying the Black Art, and he gives an equation for estimating a nominal schedule based on person-months of effort: ScheduleInMonths = 3.0 x EffortInMonths ^ (1/3). Per the book, this is very accurate (within 25%), although the 3.0 factor above varies depending on your organization (typically between 2 and 4). It is supposedly easy to use historical projects in your organization to derive an appropriate factor for your use. I am trying to reconcile the equation against Agile methods, using 2-6 week cycles which are often mini-projects that have a working deliverable at the end. If I have a team of 5 developers over 4 weeks (1 month), then EffortInMonths = 5 person-months. The equation then outputs a schedule of 3.0 x 5^(1/3), which is approximately 5 months. 5 months is much more than 25% different from 1 month. If I lower the 3.0 factor to 0.6, then the equation works (it outputs a schedule of approx. 1 month). The lowest possible factor mentioned in the book, though, is 2.0. What's going on here? I want to trust this equation for estimating a "traditional" non-agile project, but I cannot trust it when it does not reconcile with my (agile) experience. Can someone help me understand?
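    A quick check of the arithmetic with bc (e() and l() are the exponential and natural log functions from bc's -l math library):

        echo "3.0 * e(l(5)/3)" | bc -l   # nominal schedule for 5 person-months at factor 3.0, about 5.13 months
        echo "0.6 * e(l(5)/3)" | bc -l   # the same effort at a factor of 0.6, about 1.03 months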

    Read the article

  • Announcing IIS Community Newsletter

    - by steve schofield
    I'm excited to announce that a newsletter for the IIS community is available. Here is the link to sign up. The goal is to cover all happenings in the IIS community, with a monthly issue. Sign up for the IIS Community Newsletter: http://www.iislogs.com/newsletter/

    A little history: I previously was involved with Brett Hill in authoring the IIS Answers newsletter for about a year. The newsletter literally reached thousands of IIS administrators, providing up-to-date Microsoft and community-related information. It was an excellent source of information. That is my goal with the IIS Community Newsletter.

    With everything, there is always some "geeking" that goes into it. I'm using the www.StarDeveloper.com newsletter application. For a modest $10.00 fee, the application provides a simple, yet powerful way to manage newsletters. I added a CAPTCHA feature to sign-up. The CAPTCHA module I used was provided free by MONDOR software. I personally had never used one in an application before, and it was easy to implement. Thanks for sharing!

    Cheers,
    Steve Schofield
    Microsoft MVP - IIS

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or Super User. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently HAVE:

    - HP EliteBook 1540, quad-core, 8GB memory, 500GB 7200 RPM HD, eSATA port. Decent machine; it should work just fine.
    - Windows 7 64-bit host OS. This also handles my day-to-day basics (email, Word docs, etc.).
    - VMware Desktop running a Windows 7 64-bit guest OS with all my .NET dev tools, frameworks, etc. loaded on it. It's configured to use 2 cores and up to 6GB of memory, since I figure the dev environment will need more than email, Word, etc.

    This seemed like a good option to me, but I find that with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack. Things only got worse (could be my eSATA enclosure). So, for all of that, I have basically two questions in one:

    1. Has anyone used this sort of setup, and are there any gotchas either around the VMware configuration or anything else I may have missed here that you can point me to?
    2. Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert.

    If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • Certificate Authentication

    - by steve.mccall1
    Hi, I am currently working on deploying a website for staff to use remotely and would like to make sure it is secure. I was thinking: would it be possible to set up some kind of certificate authentication, where I would generate a certificate and install it on their laptops so they could access the website? I don't really want them to generate the certificates themselves, though, as that could easily go wrong. How easy / possible is this, and how do I go about doing it? Thanks, Steve
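    One common approach is TLS client certificates: act as your own CA, issue one certificate per laptop, and have the web server require a client certificate. A rough OpenSSL sketch, with file names, subjects and validity periods as placeholder assumptions:

        # one-off: create your own CA key and certificate
        openssl genrsa -out ca.key 2048
        openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=Staff CA"
        # per laptop: key, signing request, and a certificate signed by your CA
        openssl genrsa -out laptop1.key 2048
        openssl req -new -key laptop1.key -out laptop1.csr -subj "/CN=laptop1"
        openssl x509 -req -in laptop1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out laptop1.crt -days 365
        # bundle key and certificate into a PKCS#12 file the user can import into their browser
        openssl pkcs12 -export -in laptop1.crt -inkey laptop1.key -certfile ca.crt -out laptop1.p12

    The server side then needs to trust ca.crt and require client certificates (for example, SSLVerifyClient require in Apache, or "Require" under SSL Settings > Client certificates in IIS).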

    Read the article

  • Word 2007 jumplist missing

    - by Steve
    Hi, For some reason the jumplist showing recent documents on my PC has gone. The link is still pinned to the taskbar, but the list shows no documents. How can I get the jumplist to show recent documents? Thanks, Steve

    Read the article

  • Winnipeg VS.NET 2010 Launch Event Rolls On…

    - by D'Arcy Lussier
    We’re into the afternoon sessions at the Winnipeg VS.NET launch event! After Steve Porter does his magic on “What’s New for Teams with VS.NET 2010” I’ll be tag-teaming with my colleague Jason Klassen on ASP.NET and VS.NET 2010. Popcorn and prizes are coming up! Miguel Carrasco from Anvil Digital speaking to the masses. Steve starting in on What’s New for Teams in VS.NET 2010.

    Read the article

  • Gradle Support in NetBeans IDE 7.2

    - by Geertjan
    During Steve's NightHacking tour, Russel Winder and Steve Chin spent half an hour setting up NetBeans IDE to use Gradle and then gave up, because they couldn't find the NetBeans Gradle plugin. That no longer needs to happen, because Attila Kelemen's NetBeans Gradle plugin is now available in the Plugin Manager in NetBeans IDE 7.2. Aside from opening Gradle-based applications, you can now also create new ones. Details and documentation: https://github.com/kelemen/netbeans-gradle-project

    Read the article

  • Why not let developers and the public choose? Adobe's CEO responds to Steve Jobs' anti-Flash open letter

    Update of 30/04/10. Note: comments on this update start here in the topic.

    Why not let developers and the public choose? Adobe's CEO responds to Steve Jobs' anti-Flash open letter.

    This is, in essence, Shantanu Narayen's answer: if our technology is so bad and so "unsuited to the iPhone" (as Steve Jobs writes in black and white - see above), developers will not use it and consumers will turn away from it.

    Read the article

  • How to report an illegal website to the FBI or a crime department [closed]

    - by steve
    Hello all. I asked this question on Stack Overflow and they suggested asking it here; I don't know whether I can ask this question here or not. I am finding many web forums that support stealing personal info like credit cards, selling card details at $2 each, and making gift vouchers for the main auction websites and selling $500 vouchers for $10 (these guys are looting people's hard work). Where can I report these types of sites if I find them on the internet? Can anyone please tell me exactly where I can report them? Regards, Steve

    Read the article

  • Using sed, how can I remove lines with salaries ending in 500?

    - by Steve
    Using sed, how can I remove lines with salaries ending in 500? Input file:

    Steve Blenheim:238-923-7366:95 Latham Lane, Easton, PA 83755:11/12/56:20300
    Betty Boop:245-836-8357:635 Cutesy Lane, Hollywood, CA 91464:6/23/23:14500
    Igor Chevsky:385-375-8395:3567 Populus Place, Caldwell, NJ 23875:6/18/68:23400
    Norma Corder:397-857-2735:74 Pine Street, Dearborn, MI 23874:3/28/45:245500
    Jennifer Cowan:548-834-2348:583 Laurel Ave., Kingsville, TX 83745:10/1/35:58900
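    Since the salary is the last field on each line, a minimal sketch of the usual answer (datafile is a placeholder for the input file name):

        sed '/500$/d' datafile        # delete every line whose last field ends in 500
        sed -n '/500$/!p' datafile    # equivalent, printing only the lines that do not match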

    Read the article

  • Using sed, how can I delete all blank lines?

    - by Steve
    Using sed, how can I delete all blank lines? Input file:

    Steve Blenheim:238-923-7366:95 Latham Lane, Easton, PA 83755:11/12/56:20300
    Betty Boop:245-836-8357:635 Cutesy Lane, Hollywood, CA 91464:6/23/23:14500
    Igor Chevsky:385-375-8395:3567 Populus Place, Caldwell, NJ 23875:6/18/68:23400
    Norma Corder:397-857-2735:74 Pine Street, Dearborn, MI 23874:3/28/45:245500
    Jennifer Cowan:548-834-2348:583 Laurel Ave., Kingsville, TX 83745:10/1/35:58900
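    A sketch of the standard one-liner (datafile is a placeholder name); the second form also catches lines that contain only whitespace:

        sed '/^$/d' datafile
        sed '/^[[:space:]]*$/d' datafile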

    Read the article

  • Nameserver Checker

    - by Steve Griff
    Hello, I've got BIND running on a server, and although access to the domains I've set up works correctly, I was wondering if there is an online (or offline) tool to check whether I have set up the service correctly. Regards, Steve Griff
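    If command-line checks are an option, BIND ships its own validators and dig can confirm the server is answering; the paths below assume a Debian/Ubuntu layout and example.com is a placeholder zone, while online checkers such as intodns.com cover the delegation side:

        named-checkconf /etc/bind/named.conf                    # syntax-check the main configuration
        named-checkzone example.com /etc/bind/db.example.com    # validate one zone file
        dig @localhost example.com SOA                          # confirm the server answers for the zone
        dig @localhost example.com NS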

    Read the article

  • Can't gain access to old Docs and Settings

    - by Steve
    Hi, I have an old hard disk enclosed in a USB HDD caddy. I want to access G:\Documents and Settings\(username). It gives me the error "Access is Denied". The username is the same as my current Windows username, and so is the password. What do I need to do to access it? Thanks, Steve
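    The usual cause is NTFS permissions tied to the old installation's user SIDs, so you need to take ownership and re-grant yourself access. A sketch assuming Windows Vista/7 and an elevated command prompt; replace the path and account name with your own:

        takeown /F "G:\Documents and Settings\username" /R /D Y
        icacls "G:\Documents and Settings\username" /grant yourname:F /T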

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can reach multiple GB in size in a matter of weeks.

    The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics. Now I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of 1/3600!!! Very cool. I can now allow my datasets to go on forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32MB on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you a prompt that allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this: it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what granularity you can live with, and for how long. Check this out. Here is my "Disk: Percent utilized" from 5-21-2012 2:42 pm to 4:22 pm, and then the same date and time range after I went through the delete process to change everything older than 1 week to "Minutes". Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look.

    Steve

    Read the article

  • Output an ASP.NET master page webform to an HTML email

    - by Steve McCall
    Hi, I'm having a bit of a nightmare here! I'm trying to output a webform to HTML using page.RenderControl and an HtmlTextWriter, but it is resulting in a blank email. Code:

        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter htmlTW = new HtmlTextWriter(sw);
        page.RenderControl(htmlTW);
        String content = sb.ToString();
        MailMessage mail = new MailMessage();
        mail.From = new MailAddress("[email protected]");
        mail.To.Add("[email protected]");
        mail.IsBodyHtml = true;
        mail.Subject = "Test";
        mail.Body = content;
        SmtpClient smtp = new SmtpClient("1111");
        smtp.Send(mail);
        Response.Write("Message Sent");

    I also tried it by rendering a single textbox and got an error saying it needed to be within form tags (which are on the master page). I tried this fix: http://forums.asp.net/p/1016960/1368933.aspx#1368933 and added:

        public override void VerifyRenderingInServerForm(Control control)
        {
            return;
        }

    But now the error I get is: 'VerifyRenderingInServerForm(Control)': no suitable method found to override. Does anyone have a fix for this? I'm tearing my hair out!! Thanks, Steve

    Read the article

  • Replication - between pools in the same system

    - by Steve Tunstall
    OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I've got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down.

    Now, let's talk about Replication and Migration. I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration. Shadow Migration lets one take an NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs. LUN shadow migration is a roadmap item, however.

    So... what if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later you decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system for a few code updates now. The instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN and then delete the source LUN. Bam.

    Step 1. Set up a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and set up the target, which is the ZFSSA you're on now.

    Step 2. Now you can go to the LUN you want to replicate. Take note which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.

    Step 3. In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or filesystems inside of it. Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that replicating a share always replicates all the other shares in that Project as well, don't listen to them. Note that I can choose not only the Target (which is myself), but also which Pool to replicate it to. So I choose Pool1.

    Step 4. I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber-pole animation and an update in the status bar at the top of the screen that a replication event has begun. This also goes into the event log.

    Step 5. The status bar will also log an event when it's done.

    Step 6. If you go back to Configuration-->Services-->Remote Replication, you will see your event.

    Step 7. Done. To see your new replica, go to the other pool (Pool1 for me) and click the "Replica" area below the words "Filesystems | LUNs". Here, you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2.

    OK, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA. Both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more.

    Happy Halloween,
    Steve

    Read the article

  • Package Manager Console For More Than Managing Packages

    - by Steve Michelotti
    Like most developers, I prefer to not have to pick up the mouse if I don't have to. I use the Executor launcher for almost everything, so it's extremely rare for me to ever click the "Start" button in Windows. I also use shortcut keys when I can so I don't have to pick up the mouse.

    By now most people know that the Package Manager Console that comes with NuGet is PowerShell embedded inside of Visual Studio. It is based on its PowerConsole predecessor, which was the first (that I'm aware of) to embed PowerShell inside of Visual Studio and give access to the Visual Studio automation DTE object. It does this through an inherent $dte variable that is automatically available and ready for use. This variable is also available inside of the NuGet Package Manager console.

    Adding a new class file to a Visual Studio project is one of those mundane tasks that should be easier. First I have to pick up the mouse. Then I have to right-click where I want the file to go and select "Add -> New Item..." or "Add -> Class...". If you know the Ctrl+Shift+A shortcut, then you can avoid the mouse for adding a new item, but you have to manually assign a shortcut for adding a new class. At this point it pops up a dialog just so I can enter the name of the class I want. Since this is one of the most common tasks developers do, I figure there has to be an easier way, one that avoids picking up the mouse and popping up dialogs. This is where your embedded PowerShell prompt in Visual Studio comes in.

    The first thing you should do is assign a keyboard shortcut so that you can get a PowerShell prompt (i.e., the Package Manager console) quickly without ever picking up the mouse. I assign "Ctrl+P, Ctrl+M" because "P + M" stands for "Package Manager", so it is easy to remember. At this point I can type this command to add a new class:

        PM> $dte.ItemOperations.AddNewItem("Code\Class", "Foo.cs")

    which will result in the class being added. At this point I've satisfied my original goal of not having to pick up a mouse and not having the "Add New Item" dialog pop up. However, having to remember that $dte method call is not very user-friendly at all. The best thing to do is to make this a re-usable function that always loads when Visual Studio starts up. There is a $profile variable that you can use to figure out where that location is for your machine:

        PM> $profile
        C:\Users\steve.michelotti\Documents\WindowsPowerShell\NuGet_profile.ps1

    If the NuGet_profile.ps1 file does not already exist, you can just create it yourself and place it in that directory. Now you can put a function inside like this:

        1: function addClass($className)
        2: {
        3:     if ($className.EndsWith(".cs") -eq $false) {
        4:         $className = $className + ".cs"
        5:     }
        6:
        7:     $dte.ItemOperations.AddNewItem("Code\Class", $className)
        8: }

    Since it's in the NuGet_profile.ps1 file, this function will automatically always be available for me after starting Visual Studio. Now I can simply do this:

        PM> addClass Foo

    At this point, we have a *very* nice developer experience. All I did to add a new class was: "Ctrl-P, Ctrl-M", then "addClass Foo". No mouse, no pop-up dialogs, no complex commands to remember. In fact, PowerShell gives you auto-completion as well. If I type "addc" followed by [TAB], then intellisense pops up and you can see my custom function appear in the list. Now I can type the next letter "c" and [TAB] to auto-complete the command. And if that's still too many keystrokes for you, then you can create your own PowerShell custom alias for your function like this:

        PM> Set-Alias addc addClass
        PM> addc Foo

    While all this is very useful, I did run into some issues which prompted me to make even further customizations. This command will add the new class file to the current active directory. Depending on your context, this may not be what you want. For example, by convention all view model objects go in the "Models" folder in an MVC project. So if the current document is in the Controllers folder, it will add your class to that folder, which is not what you want. You want it to always add the class to the "Models" folder if you are adding a new model in an MVC project. For this situation, I added a new function called "addModel" which looks like this:

        1: function addModel($className)
        2: {
        3:     if ($className.EndsWith(".cs") -eq $false) {
        4:         $className = $className + ".cs"
        5:     }
        6:
        7:     $modelsDir = $dte.ActiveSolutionProjects[0].UniqueName.Replace(".csproj", "") + "\Models"
        8:     $dte.Windows.Item([EnvDTE.Constants]::vsWindowKindSolutionExplorer).Activate()
        9:     $dte.ActiveWindow.Object.GetItem($modelsDir).Select([EnvDTE.vsUISelectionType]::vsUISelectionTypeSelect)
        10:     $dte.ItemOperations.AddNewItem("Code\Class", $className)
        11: }

    First I figure out the path to the Models directory on line #7. Then I activate the Solution Explorer window on line #8. Then I make sure the Models directory is selected so that my context is correct when I add the new class, and it will be added to the Models directory as desired.

    These are just a couple of examples of things you can do with the PowerShell prompt that you have available in the Package Manager console. As developers we spend so much time in Visual Studio, why would you not customize it so that you can work in whatever way you want to work?! The next time you're not happy about the way Visual Studio makes you do a particular task, automate it! The sky is the limit.

    Read the article

  • Firefox Using Prism Website Icon instead of Firefox Icon in Taskbar

    - by Yaakov Ellis
    A while back, I created a website shortcut using the now-discontinued Prism extension for Firefox. This worked fine at the time. However, now that Prism has been discontinued, whenever I open Firefox, the shortcut for FF shows up in my Windows taskbar with the website favicon of the old Prism shortcut site that I created. If I right-click on the taskbar shortcut, it shows the old shortcut to the Prism app (even though I deleted its entry manually from C:\Users\USERNAME\AppData\Roaming\WebApps). If I then click on [Pin to Taskbar] and then right-click and [Unpin from Taskbar], the icon will switch back to the FF icon. If I right-click on it a third time, it will switch back to the Prism site favicon. I assume that if I were able to install Prism, it would give me some way to remove the site. However, since Prism is discontinued and incompatible with the current version of FF, that is not an option.

    What I would like to do is completely remove references to the Prism app from my computer:

    1. Disassociate the Prism website favicon from Firefox, so that it never replaces the FF icon in the Windows taskbar.
    2. Make sure the shortcut to the Prism site never shows up in the right-click menu for FF.

    Read the article

  • Good Free Ubuntu Server VMWare Image Needed

    - by Yaakov Ellis
    Can anyone recommend a good, free Ubuntu Server VMware image (or Virtual Appliance, as they call them)? I have looked on the VMware VAM and there are literally hundreds to choose from. I am looking for something that can, with very minimal effort, serve as a development platform for LAMP applications (so it should have all of those installed, plus things like phpMyAdmin). Bonus points if there is some way to create new virtual hosts (for developing and testing new sites) on Apache without having to go digging around conf files and guessing at the syntax.
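    If a stock Ubuntu Server VM ends up being the fallback, the LAMP pieces are only a couple of commands away, and new virtual hosts need very little conf digging; the names and paths below are examples, and newer Apache releases expect the site file to end in .conf:

        sudo apt-get update
        sudo apt-get install lamp-server^ phpmyadmin   # tasksel LAMP metapackage plus phpMyAdmin
        # minimal virtual host, saved as /etc/apache2/sites-available/mysite
        #   <VirtualHost *:80>
        #       ServerName mysite.local
        #       DocumentRoot /var/www/mysite
        #   </VirtualHost>
        sudo mkdir -p /var/www/mysite
        sudo a2ensite mysite                           # enable the site
        sudo service apache2 reload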

    Read the article
