Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Multiple CAS servers with Microsoft Exchange and selective authorization

    - by John Wilcox
    I have a Microsoft Exchange 2010 organization within one Microsoft Windows domain and I have users accessing it through OWA. For simplicity let's say I currently have one CAS server (CAS 1), which is accessible only through a VPN connection. Let's call the users connecting to the first CAS server group A. For some users, though, I need to install another CAS server (CAS 2) so that they can connect without using a VPN connection. Let's call those users group B. What I need to achieve is that group A can only log in to CAS 1 and group B can only log in to CAS 2. Now I know that one can disable/enable OWA per user, but in my case that is not enough because OWA must be enabled for both groups.

    Read the article

  • Windows Handling Piped Commands Error Redirection

    - by jpmartins
    Warning: I am no expert on building scripts, and sorry for the lousy English. In the case of generating a CSV from a database query I'm using the following commands:
    ... CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" >%FicheiroR% 2>%LOGFILE% ...
    I'm using a Windows implementation of grep. The 2>%LOGFILE% on both the java and grep commands is causing an error message indicating the file is being used by another process. The ugly workaround I have come up with is to redirect grep's errors to a temporary %LOGFILE%.aux:
    java ... | grep ... 2>%LOGFILE%.aux
    type %LOGFILE%.aux >> %LOGFILE%
    del %LOGFILE%.aux
    What is a better solution?
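    One direction worth trying (a sketch of my own, not from the original post, and untested): group the whole pipeline in parentheses so that a single redirection collects stderr from both commands, instead of the java and grep processes each opening %LOGFILE% at the same time. The "..." placeholders stand for the options elided above.

        (
          CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ";" -input 42.sql -formatter csv -delimiter ";" | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" >%FicheiroR%
        ) 2>>%LOGFILE%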

    Read the article

  • How can I configure a Linksys EA4500 + USB printer for network printing (without Connect Cloud)

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 is a bit sparse on setup details. It says I can connect a USB printer, but then goes on to try to sell the "Connect Cloud" remote management software. I don't want that. I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer, there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to find the printer via any existing protocols from Windows 7 or Mac OS X (Windows network share, IPP/LPD, etc.). I'm curious if I need to have the "Storage" accounts active and connect to my router either via the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

    Read the article

  • Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on?

    - by Kamil Kisiel
    We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations we're looking at upgrading them to some proper server-grade hardware with support etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and PF, so we're hesitant to move away from them to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear if other people use a setup like this in their business, or have migrated to or away from one.

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build Web Parts, Master Pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages and adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution with all these artifacts working together. What I will discuss and provide a solution for is a package that has:

    1. Master Page
    2. Page Layout
    3. Page Web Parts
    4. Site Pages

    Almost all of it is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we skip this during development; in fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and, voila, their site appears! I had a project that had me build 8 pages like this as part of the solution.

    In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it looks like this: it is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code.

    Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. What you are seeing is a complete solution to add a master page to a site collection, which contains:

    1. A Master Page module which contains the Master Page and Page Layout
    2. The Footer module to add the Footer Web Part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
       b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
       c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

    I won't go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: I have the first two requirements completed, and now I need to add the page web parts and then build the pages in code. For the page web parts, I will use one downloaded from CodePlex which, for simplicity, does not use a SharePoint custom list: a Weather Web Part. The other, downloaded from MSDN, is a SharePoint custom Calendar Web Part; I had to add some functionality using jQuery to make the events color-coded beyond the built-in 10 overlays.
    Here is the solution with the added projects, along with a screen shot of the Weather Web Part deployed and a screen shot of the Site Calendar with jQuery.

    Okay, now we get to the final item: creating the publishing pages. We need to add a feature receiver to the DJGreenMaster project; I will name it DJSitePages and also add an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. I added a reference to Microsoft.SharePoint.Publishing.dll, contained in the ISAPI folder of the 14 hive.

    First we will add some static methods which we will call in our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;

            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }
            // In case of content approval, approve the file (publishing site)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the method CreatePublishingPage with these parameters: the current web, the name of the page, the page layout, and the title of the page.
    Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }
            // Search for the page layout for creating the new page
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            // Check whether the page layout could be found in the collection;
            // if not (== null), return, because the page has to be based on
            // an existing page layout
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if ((p.Name == pageName)) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // build the page
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut("Home.aspx", newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }
            // newPage.Update();
            // Now we can check in the newly created page to the "Pages" library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here is:

    1. We need to find out if we are dealing with a publishing web.
    2. Get the page layout.
    3. Create the page in the Pages list.
    4. Based on the page name, we build that page. (Here is where we can add all the methods to build multiple pages.)

    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method.
    Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Build the page.
            // Get the web part manager for each page and do the same code as below
            // (copy and paste, changing to the web parts for that page).
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout web part zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
                new CustomCalendarWebPart() { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };
            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:

    1. We got a reference to the SharePoint limited web part manager, linked to Home.aspx.
    2. We instantiated a new Weather Web Part and used the manager to add it to the page in a web part zone identified by ID, thus the need for a page layout where the developer knows the IDs.
    3. We instantiated the Calendar Web Part and used the manager to add it to the page.
    4. We then called the publishing page's Update method.
    5. Lastly, the CreatePublishingPage method checks in the page just created.

    Here is a screen shot of the page right after a deploy! Okay, I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

    Read the article

  • Install Adobe AIR on Ubuntu/Linux

    Quite some time ago, Adobe released the Linux version of Adobe AIR to bring web applications and widgets to your desktop. Installing new applications on a Linux system is not always as easy as switching the computer on. The following instructions might be helpful to install Adobe AIR on any Linux system. First of all, get the latest installer of Adobe AIR from http://get.adobe.com/air/ - as of writing this article the file name is AdobeAIRInstaller.bin. Save the download in your preferred folder. Now, there are two ways to run the installer - visual style or console style.
    Visual installation: Launch your favorite or standard file manager like Thunar or Nautilus and browse to the folder where AdobeAIRInstaller.bin has been saved. Right click on the file and choose 'Properties' in the context menu. Set 'Execute' permissions and confirm the modifications with OK. Rename the file to AdobeAIRInstaller. Double click it and follow the instructions.
    Using the console: Open a terminal like xterm. Change into the directory where you stored the download. Run this command: [code]chmod +x AdobeAIRInstaller.bin[/code] Now run this command: [code]sudo ./AdobeAIRInstaller.bin[/code] The normal installer will open; install it. From now on, whenever you download a .air file, just double click it and it will be installed.
    Troubleshooting: In case the installation does not start properly, try to install via the console. This gives you more details about the reasons. Should you run into something like this: [code]AdobeAIRInstaller.bin: 1: Syntax error: "(" unexpected[/code] double check the execute permission of the installer file and try again.

    Read the article

  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi friends, the flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time if you don't want to leave it lingering around. First, add this line to the head of your layout to ensure the Prototype and script.aculo.us javascript libraries are loaded. Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts. "flash", :id = flash_type % "text/javascript" do % setTimeout("new Effect.Fade('');", 10000); This will wrap the flash message in a div with class='flash' and id='error', 'notice' or 'warn' depending on the flash key specified. The value '10000' is the time in milliseconds before the flash will disappear; in this case, 10 seconds. This function looks pretty good, and little javascript stunts like this can help make your site feel more professional. It's also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application. Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you'll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML code becoming invalid, because the unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that will hide all divs with class 'flash' after a time delay, achieving the same effect and also ensuring your code stays valid with more than one flash message! "flash #{flash_type}" % "text/javascript" do % setTimeout("$$('div.flash').each(function(flash){ flash.hide();})", 10000); In this example, the div id is not set at all. Instead, each flash div will have class "flash" and also a class for the type of flash message ("error", "warning" etc.). Have a Great Day.. :)
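    The two snippets above lost their ERB tags when the post was scraped, so here is a rough reconstruction of the second (class-based) variant; this is my guess at the intended pattern, not the author's exact code, and it assumes Prototype/script.aculo.us are loaded:

        <% flash.each do |flash_type, message| %>
          <div class="flash <%= flash_type %>">
            <%= message %>
          </div>
        <% end %>

        <script type="text/javascript">
          setTimeout("$$('div.flash').each(function(flash){ flash.hide(); })", 10000);
        </script>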

    Read the article

  • Remote deploying wars to a liferay installation

    - by iftrue
    With vanilla tomcat, you can POST to URLs beneath SOMURL/manager/ with a proper manager user role defined. The liferay deployment of tomcat, however, is missing the manager and host-manager applications, and when I copy the directories from a vanilla Tomcat installation, I get the exception below: Exception: javax.servlet.ServletException: Error allocating a servlet instance org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:636) root cause java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:636) What's the proper way to remote deploy wars to a liferay instance? (Not portlets, in my case.)

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >>afile, but I don't see all the environment variables that the man page states pam_exec.so exports (namely PAM_USER, I think). I only see the following: PAM_SERVICE=openvpn PAM_TYPE=auth PWD=/usr/local/openvpn/bin SHLVL=1 A__z="*SHLVL I do successfully pick up the password off of STDIN and output it with this same script. But for the life of me I can't get the username. Any thoughts on what I should try next?
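    For reference, a minimal sketch of the kind of helper script involved; this is an assumption, not taken from the question, and it presumes a PAM line along the lines of "auth required pam_exec.so expose_authtok /path/to/script" (expose_authtok is what makes the password arrive on stdin, and the man page says variables such as PAM_USER, PAM_SERVICE and PAM_TYPE are placed in the environment):

        #!/bin/sh
        # Read the authentication token that pam_exec writes to stdin
        read -r password
        # PAM_USER should be set by pam_exec; log the metadata only, never the password
        logger -t openvpn-pam "service=$PAM_SERVICE type=$PAM_TYPE user=$PAM_USER"
        # Real credential checking would go here; exit 0 means PAM success
        exit 0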

    Read the article

  • How to find the Fastest DNS servers to host our domain?

    - by Denis Volovik
    The question was born because lately we've seen a pretty odd (well, at least for us, for the first time) error message in Google Webmaster Tools: "DNS lookup timeout"... I was pretty sure that with eNom's 5 DNS servers (dns1... to dns5.name-services.com) we're pretty set... But it appears that from Europe/Hungary, for example, dns1.name-services.com takes 170 ms. to respond to a ping, while GoDaddy's ns75.domaincontrol.com takes only 40 ms. to respond... and at the same time, dns2 to dns5.name-services.com each result in a timeout error (on ping)... This issue came to our attention right in the final stages of optimizing our web site (almost to death), basically just in time... I would love to move our domains to a fast (fastest?) and reliable DNS server, but how do I find one? Also, I did the ping tests from various geographic locations (we have servers in many countries) and GoDaddy seemed to be faster than eNom in almost every case. I'd be very thankful for any hints on this! Edited: Well... maybe this one does not have an answer, after all...
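    Ping only measures ICMP round trips; since the complaint from Google was about DNS lookups, it may be more telling to time actual DNS queries. A small sketch (mine, not from the question; substitute your own domain for example.com):

        #!/bin/sh
        # Compare authoritative nameserver response times using dig's reported query time
        for ns in dns1.name-services.com dns2.name-services.com ns75.domaincontrol.com; do
          echo "== $ns =="
          dig @"$ns" example.com A +tries=1 +time=3 | grep "Query time"
        done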

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common use case of an LOV that we want to display a non-key value in the LOV but store the key value in the DB. I had to do the same in a sample application I was building. During the implementation of this, I realized that there are multiple ways to achieve it. I am going to describe each of these below. Example: Let's take our classic HR schema. I have 2 tables, Employee and Department, where Dno is the foreign key attribute in Employee that references the Department table. I want to create an LOV for Department such that the list always displays Dname instead of Dno. However, when I update it, it should update the reference key Dno. To achieve this I had 3 alternatives. 1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. Refer to the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html Positives: 1. Easy to implement and use. 2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display it inside a query panel. Negative: We have to create an additional join on the VO. Ex:
    SELECT Employees.EMPLOYEE_ID,
           Employees.FIRST_NAME,
           Employees.LAST_NAME,
           Employees.EMAIL,
           Employees.PHONE_NUMBER,
           Department.Dno,
           Department.Dname
    FROM EMPLOYEES Employees, Department Department
    WHERE Employees.Dno = Department.Dno
    2) Approach 2:

    Read the article

  • sysprep failure on Windows Server 2008

    - by dushyantp
    Before deploying an Azure VM role, we need to perform %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown But in my case sysprep fails, with the log file %windir%\system32\sysprep\Panther\setuperr.txt saying: 2012-07-05 08:03:57, Error [0x0f0073] SYSPRP RunExternalDlls:Not running DLLs; either the machine is in an invalid state or we couldn't update the recorded state, dwRet = 31 2012-07-05 08:03:57, Error [0x0f00ae] SYSPRP WinMain:Hit failure while processing sysprep cleanup external providers; hr = 0x8007001f I do not always want to create a new image. Is there any workaround? I followed the instructions in MS support here and tried: %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:.\unattend.xml It did not work. Under certain circumstances, I need to tear down the VM image from Azure and re-deploy with some more changes, so sysprep has to run almost twice every week.

    Read the article

  • Change Gnome panel profile according to number of displays

    - by ifischer
    I'm running Ubuntu 10.04 on a laptop. I have a startup script which enables external displays if they are connected. It runs at GDM startup, configured in /etc/gdm/Init/Default. When I'm running without external displays, Gnome should use 2 panels. When I'm using 2 external displays, Gnome should add an additional panel to the second display. But this should of course be removed again if I detach the external displays (and restart). Can I configure this use case by using Gnome panel profiles? I read that there is a startup option "--profile" for gnome-panel, but I don't know how and where I could switch the profile, especially because this has to be done after recognizing the number of displays. Or can I add a general Gnome profile and switch between those profiles somehow to achieve this behavior?
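    As a rough illustration only (the --profile option is the one mentioned above, which I have not verified, and the profile names are invented), a script like the one already run from /etc/gdm/Init/Default could count connected outputs with xrandr and pick a profile accordingly:

        #!/bin/sh
        # Count connected outputs; " connected" (with leading space) does not match "disconnected"
        displays=$(xrandr --query | grep -c " connected")
        if [ "$displays" -gt 1 ]; then
          profile="docked"
        else
          profile="mobile"
        fi
        gnome-panel --profile="$profile" &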

    Read the article

  • Mark as read on delete in Outlook?

    - by x3ja
    I'd like Outlook to mark any messages I delete as read. For bonus points, I'd like it to only do this for messages that I have opened/previewed before pressing delete, since that means I've looked at the content and chosen to delete it. I know I can set it to mark a message as read after x seconds of looking at it; that's not what I want. I also know that I can move off the message and back onto it, or right-click to mark as read - still not what I want. I'm using Outlook 2007, in case that matters. [Edit: I just found I can at least mark as read with a keyboard shortcut: Ctrl-Q, but again, it'd be nice to not have to do this. More shortcuts here.]

    Read the article

  • How to display programs, started by TSWA RemoteApp, inside a browser instead of directly on the desktop

    - by richardboon
    For those not familiar with Terminal Services Web Access and Resulting Internet Communication in Windows Server 2008, here's a brief overview: technet.microsoft.com/en-us/library/cc754502(WS.10).aspx The problem I am trying to solve can be seen in the picture at step 16, where the application is displayed directly on the desktop [see link below]: http://blogs.technet.com/askcore/archive/2008/07/22/publishing-the-hyper-v-management-interface-using-terminal-services.aspx I am in the process of setting up Terminal Services Web Access RemoteApp for our company. Users only want RemoteApp and need to see the remote program running within (contained inside) the browser. They don't want to see or access the whole desktop [as is the case with Remote Desktop, which can be displayed inside a browser].

    Read the article

  • Backup Exec 12.5 or 2010? [closed]

    - by Chris Thorpe
    Backup Exec 2010 has just dropped, and I'm about to implement a new BEWS infrastructure, complete with CALs and new central servers. When I specced this up last year, I ignored 2010 and focused on Backup Exec 12.5, since it's a mature product. In previous experience, major releases of BE had numerous technical issues and seemed to improve significantly only at the first service pack. However, our refresh cycle on the backup infrastructure is slow, the main driver usually being lack of support for some new server type (in this case, ESX has driven our current upgrade need). With this in mind, I'm wondering if Backup Exec 2010 should be my first choice, as it'll last longer under current support than 12.5, which will approach EOL soon. Has anyone got any perspective they could add to this? Right now, I'm leaning towards biting the bullet and going with 2010.

    Read the article

  • Lightest weight ubuntu desktop for text editing in big terminal windows?

    - by Kevin Pauli
    I have an older Windows laptop onto which I'm installing Ubuntu within a VM. My goal is just to use terminal-based Linux tools such as vim and shell scripting. I don't give a hoot about any GUI for this box. So I first installed the Ubuntu minimal CD and chose "Basic Ubuntu Server". Upon boot, the text-based terminal came up and I logged in, but the problem is it only gives me 80 columns. I want to do terminal-mode vim but have a couple hundred columns to take advantage of my large monitor. If you happen to know how to do that, please see my question here. This post assumes that the other question is not answerable, and that I will need a desktop to get more than 80 columns in a terminal window. So if that is the case, I want the lightest-weight one possible, because this is older hardware and all I want is the ability to have nice big text-based terminal windows for editing text. From the Ubuntu minimal CD, I see options for Edubuntu, Kubuntu, etc. Which one of the available desktops would be a good choice for my needs?

    Read the article

  • How to change aging AD password while connected over VPN from Mac

    - by Franek Kuciapa
    I am connecting to the office from a Mac via VPN, using the Cisco AnyConnect Secure Mobility Client. I do not know what to do when my AD password on the firm side ages and approaches expiration, to ensure that my Mac and VPN continue to work afterwards. Is the proper thing to do in this case to connect via VPN and then change the password on the Mac via System Preferences, Users & Groups? Will this update AD on the server side? Will it also sync PointSec, which is running on the Mac? Or is the better procedure to RDP to a Windows box while connected over VPN and change the password there, hoping the Mac will somehow sync up? Running Mountain Lion on the Mac.

    Read the article

  • using sed to print next line after match

    - by smokinguns
    I came across various examples of printing the next line after a match that use awk and sed. I'm using this in my code and it works fine. Now I want to use a variable instead of a hardcoded value for the pattern match. The search pattern string includes forward slashes "/". How do I use a variable that has "/" in it to print the next line after the match? The following doesn't seem to work: var="/somePath/to/my/home" val=`echo -e "$someStr" | sed -n ':$var:{n;p;}'` In this case, val is always blank. I'm using ":" as the delimiter instead of "/". I'm on Mac OS X.
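    Two directions that usually work for this (sketches of mine, not from the question): let the shell expand the variable by using double quotes, and keep the slashes harmless either with an escaped alternative delimiter in sed or by passing the pattern into awk as a variable:

        #!/bin/bash
        var="/somePath/to/my/home"
        someStr=$'header\n/somePath/to/my/home\nthe line we want'

        # sed: an address with a custom delimiter is written \#pattern# (any char after the backslash)
        val=$(echo "$someStr" | sed -n "\#$var#{n;p;}")
        echo "$val"

        # awk: pass the pattern as a variable, so delimiters are not an issue at all
        val=$(echo "$someStr" | awk -v pat="$var" 'hit { print; hit = 0 } $0 ~ pat { hit = 1 }')
        echo "$val"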

    Read the article

  • Building computer: Casing for peripheral sockets is a pain!

    - by burnt1ce
    I have a casing that covers the spacing between the peripheral sockets, which comes as standard when you buy a motherboard. My problem with these covers is that they have spokes that push the motherboard away, so the sockets don't even come out of the covers. This also misaligns the screws on my ATX motherboard with the holes in my computer case. I usually break these spokes so that I can align my motherboard correctly. Why the heck do motherboard manufacturers put spokes on these covers? Am I using them wrong? UPDATE Here's an image that I found that shows the plastic casing: http://z.about.com/d/pcsupport/1/5/m/-/-/-/tour%5Fexternal.jpg You can even see the indentation that makes the spokes that push the motherboard inwards.

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
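    The post doesn't include the cache itself, so here is a rough sketch of the kind of memoizing wrapper described (names are mine); it is only valid because the build tool walks a tree that doesn't change during a run:

        import os
        import stat

        _stat_cache = {}

        def cached_stat(path):
            """Stat each path at most once per run; None means the path doesn't exist."""
            try:
                return _stat_cache[path]
            except KeyError:
                try:
                    result = os.stat(path)
                except OSError:
                    result = None
                _stat_cache[path] = result
                return result

        def cached_isdir(path):
            st = cached_stat(path)
            return st is not None and stat.S_ISDIR(st.st_mode)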

    Read the article

  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it uses references to other mutable classes. In this case, the class under development appears from the external environment to have state. Do any of the benefits (see below) of immutable classes hold true even for only "internally immutable" classes? Is there an accepted term for the aforementioned "internal mutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable. Is the distinction between mutability and side-effect-ness/state important? Josh Bloch lists the following benefits of immutable classes:
    - are simple to construct, test, and use
    - are automatically thread-safe and have no synchronization issues
    - do not need a copy constructor
    - do not need an implementation of clone
    - allow hashCode to use lazy initialization, and to cache its return value
    - do not need to be copied defensively when used as a field
    - make good Map keys and Set elements (these objects must not change state while in the collection)
    - have their class invariant established once upon construction, and it never needs to be checked again
    - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
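    To make the "internally immutable" case concrete, here is a small example of my own (not from the post): the class's own fields never change, yet it can still appear to have state because it shares a reference to a mutable Date with its callers:

        import java.util.Date;

        public final class Appointment {
            private final String title;
            private final Date when;   // mutable collaborator

            public Appointment(String title, Date when) {
                this.title = title;
                this.when = when;      // no defensive copy, deliberately, to show the leak
            }

            public String getTitle() { return title; }

            // Callers can mutate the Date they receive and change what this object reports
            public Date getWhen() { return when; }

            // A deeply immutable variant would copy in both places:
            //   this.when = new Date(when.getTime());   and   return new Date(when.getTime());
        }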

    Read the article

  • nano syntax highlighting not working for all languages

    - by Dejan
    I have a funny situation where I am unable to add custom highlighting definitions to my nano text editor. The funny thing is that the predefined ones work like a charm and can be edited. But I have created a new one for js with $ sudo touch js.nanorc $ sudo nano js.nanorc and my current js.nanorc looks like this:
    syntax "JavaScript" "\.js$"
    color blue "\<[-+]?([1-9][0-9]*|0[0-7]*|0x[0-9a-fA-F]+)([uU][lL]?|[lL][uU]?)?\>"
    color blue "\<[-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+)([EePp][+-]?[0-9]+)?[fFlL]?"
    color blue "\<[-+]?([0-9]+[EePp][+-]?[0-9]+)[fFlL]?"
    color brightblue "[A-Za-z_][A-Za-z0-9_]*[[:space:]]*[(]"
    color black "[(]"
    color cyan "\<(break|case|catch|continue|default|delete|do|else|finally)\>"
    color cyan "\<(for|function|get|if|in|instanceof|new|return|set|switch)\>"
    color cyan "\<(switch|this|throw|try|typeof|var|void|while|with)\>"
    color cyan "\<(null|undefined|NaN)\>"
    color brightcyan "\<(true|false)\>"
    color green "\<(Array|Boolean|Date|Enumerator|Error|Function|Math)\>"
    color green "\<(Number|Object|RegExp|String)\>"
    color red "[-+/*=<>!~%?:&|]"
    color magenta "/[^*]([^/]|(\\/))*[^\\]/[gim]*"
    color yellow ""(\\.|[^"])*"|'(\\.|[^'])*'"
    color magenta "\\[0-7][0-7]?[0-7]?|\\x[0-9a-fA-F]+|\\[bfnrt'"\?\\]"
    color brightblack "(^|[[:space:]])//.*"
    color brightblack start="/\*" end="\*/"
    color brightwhite,cyan "TODO:?"
    color ,green "[[:space:]]+$"
    color ,red " +"
    If anyone can see the problem then please tell me
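    The definitions above look reasonable, so (as a guess, not something stated in the post) the missing piece is probably that nano only loads syntax files which are explicitly included: the stock definitions work because /etc/nanorc already has include lines for them. A one-line sketch of what would go in /etc/nanorc or ~/.nanorc, assuming js.nanorc was created in /usr/share/nano:

        ## Assumption: adjust the path to wherever js.nanorc actually lives
        include "/usr/share/nano/js.nanorc"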

    Read the article

  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five minute video I shared last week has whet your appetite for more, then this might be just what you are looking for! The same international team that has made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise – have also created a thirty minute version that walks through in much more detail and shows you, from the perspective of various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end to end process lifecycle. The video centres around a Retail Leasing use case, and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper. Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules, Pablo, the Process Owner, reviews the process documentation and tests the process using the new ‘Process Player’, James, the Process Analyst, analyses the process and identifies potential bottle necks using ‘Process Simulation’. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM PS6,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How to prevent 2D camera rotation if it would violate the bounds of the camera?

    - by Andrew Price
    I'm working on a Camera class and I have a rectangle field named Bounds that determines the bounds of the camera. I have it working for zooming and moving the camera so that the camera cannot exit its bounds. However, I'm a bit confused on how to do the same for rotation. Currently I allow rotating of the camera's Z-axis. However, if sufficiently zoomed out, upon rotating the camera, areas of the screen outside the camera's bounds can be shown. I'd like to deny the rotation assuming it meant that the newly rotated camera would expose areas outside the camera's bounds, but I'm not quite sure how. I'm still new to Matrix and Vector math and I'm not quite sure how to model if the newly rotated camera sees outside of its bounds, undo the rotation. Here's an image showing the problem: http://i.stack.imgur.com/NqprC.png The red is out of bounds and as a result, the camera should never be allowed to rotate itself like this. This seems like it would be a problem with all rotated values, but this is not the case when the camera is zoomed in enough. Here are the current member variables for the Camera class: private Vector2 _position = Vector2.Zero; private Vector2 _origin = Vector2.Zero; private Rectangle? _bounds = Rectangle.Empty; private float _rotation = 0.0f; private float _zoom = 1.0f; Is this possible to do? If so, could someone give me some guidance on how to accomplish this? Thanks. EDIT: I forgot to mention I am using a transformation matrix style camera that I input in to SpriteBatch.Begin. I am using the same transformation matrix from this tutorial.
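    One way to frame the check (a sketch of mine, not from the question, using the same XNA types): tentatively apply the rotation, transform the four screen corners back into world space with the inverse of the view matrix, and reject the rotation if any corner falls outside Bounds. GetViewMatrix() and the viewport size parameters below are placeholders for whatever the camera already uses to build the transform passed to SpriteBatch.Begin:

        public bool TrySetRotation(float newRotation, int viewportWidth, int viewportHeight)
        {
            float previous = _rotation;
            _rotation = newRotation;

            Matrix inverse = Matrix.Invert(GetViewMatrix());
            Vector2[] corners =
            {
                Vector2.Transform(Vector2.Zero, inverse),
                Vector2.Transform(new Vector2(viewportWidth, 0), inverse),
                Vector2.Transform(new Vector2(0, viewportHeight), inverse),
                Vector2.Transform(new Vector2(viewportWidth, viewportHeight), inverse),
            };

            foreach (Vector2 corner in corners)
            {
                if (_bounds.HasValue && !_bounds.Value.Contains((int)corner.X, (int)corner.Y))
                {
                    _rotation = previous;   // would expose area outside the bounds: refuse
                    return false;
                }
            }
            return true;
        }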

    Read the article
