Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.

Page 518/1016 | < Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands:

        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install update-manager-core
        sudo vi /etc/update-manager/release-upgrades   (changed the last line to Prompt=normal)
        sudo do-release-upgrade -d

    This upgrade was OK. I decided to repeat the same steps to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message:

        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Can not mark 'xubuntu-desktop' for upgrade
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Nov 21 09:37:21 2011) ===
        === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) ===

    How can I correct it?
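
    A minimal triage sketch (log paths are the standard Ubuntu locations; xubuntu-desktop is the package named in the error above):

        less /var/log/dist-upgrade/main.log     # the upgrader's own record of why it could not mark the package
        sudo apt-get install xubuntu-desktop    # often prints the concrete dependency conflict
        ls /etc/apt/sources.list.d/             # third-party PPA entries that commonly block the calculation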

    Read the article

  • Does an ESEUTIL defrag of an Exchange store also perform an integrity check/repair on it?

    - by Bigbio2002
    Earlier this morning, store.exe fuzzled up in one way or another, which necessitated a restart of our Exchange server. It came back online with no errors or problems, all the transaction logs replayed successfully, and all the stores mounted as normal. To me, it was just one of those random crashes; however, our consultant suspects it was caused by corruption in one of the stores. Perhaps he's correct, since he has far more experience than me, but that's not the point. To fix the suspected errors, he's planning to run an ESEUTIL defrag (via PerfectDisk), which he claims will also fix any errors present. From what I understand, defrag, verify, and repair are 3 separate actions, and a defrag does not imply any kind of integrity check. Is this correct? Are there any dangers of running a straight-up defrag on a database that might be corrupt?

    Edit: Here's the first error in the event log, which indicated the start of the problems we were having. Anyone know what it might indicate?

        Event Type: Error
        Event Source: Microsoft Exchange Server
        Event Category: None
        Event ID: 1000
        Date: 11/23/2011
        Time: 8:15:47 AM
        User: N/A
        Computer: SERVER
        Description: Faulting application exsp.dll, version 6.5.7638.1, stamp 430e735b, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51f0a, debug? 0, fault address 0x0000bef7. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: (hex payload omitted; it decodes to the same failure text as the Description line)
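
    The distinction drawn in the question matches ESEUTIL's own switches; a hedged sketch for an Exchange 2003-era server (the database name is illustrative; run only against a dismounted store, ideally a copy):

        eseutil /d priv1.edb                (offline defragmentation only; rebuilds the file, no logical repair)
        eseutil /g priv1.edb                (integrity check; read-only verification)
        eseutil /p priv1.edb                (hard repair; can discard unreadable pages - last resort)
        isinteg -s SERVER -test alltests    (application-level logical check, typically run after a repair)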

    Read the article

  • rewrite rule does not rewrite url as expected

    - by user1708687
    I have a problem with a CMS website that normally generates readable URLs. Sometimes navigation links are shown as www.domain.com/22, which results in an error, instead of www.domain.com/contact. I have not found a solution for that yet, but the page works if the URL is www.domain.com/index.php?id=22. Therefore, I'm trying to rewrite www.domain.com/22 to www.domain.com/index.php?id=22, and I have used this rewrite rule:

        RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [NC]

    I tested it using http://htaccess.madewithlove.be and there it shows the correct result, but on the website no rewrite happens.
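
    When an online tester accepts a rule that the live server ignores, the usual suspect is mod_rewrite not being active in that context rather than the pattern itself. A hedged .htaccess sketch (the id parameter comes from the question; the conditions keep real files and directories reachable):

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [L,QSA]

    This assumes the vhost permits overrides (AllowOverride FileInfo or All); with AllowOverride None, .htaccess rules are silently skipped.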

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com.  This is a program that can quickly load any CSV file into ODBC-compliant databases using data integration.  For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMSs have support for loading CSV files into tables – but it is not quite as easy as one click of a button.  First of all, most databases have a command-line interface, and you need the file and a configuration script in order to load up.  You also need to know enough to write the script – which for novices can be extremely daunting.  On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be!  There are many other tools that enable uploading files to a database.  They can be very fancy – some can generate configuration files automatically, others load the data directly.  Again, novices will be able to tell you why these aren't the most useful programs in the world.  You see, these programs were created with SQL in mind, not for uploading data.  If you don't have large amounts of data to upload, getting the configurations right can be a long process and you will have to check the generated code yourself.  Not exactly "easy to use" for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially people who don't want to learn a lot of new material all at once.  CSVexpress has an easy-to-navigate graphical user interface and no scripting or coding is required.  There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen.  But the best thing of all – it's free! That's right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately.  If you're currently happy with your method of data configuration, keep up the good work.  For the rest of us, there's CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections on a piece of software, though they could very well be managed at an environmental level (i.e., the operating system would take care of it). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User management is the OS's responsibility: Not exactly meant as a security feature, but in a similar case, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields: A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields (a minimal sketch of the markup appears after the questions below). This has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to avoid them being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or the sign that you need a more secure (and dedicated) environment/account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?
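
    For reference, the attribute under discussion is plain markup; a minimal sketch of the HTML5 form it describes (field names are illustrative):

        <form action="/login" method="post" autocomplete="off">
          <input type="text" name="user">
          <input type="password" name="pass" autocomplete="off">  <!-- per-field override -->
        </form>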

    Read the article

  • Shibboleth: found encrypted assertions, but no CredentialResolver was available

    - by HorusKol
    I've gotten a Shibboleth Service Provider (SP) up and running, and I'm using the TestShib Identity Provider (IdP) for testing. The configuration appears to be all correct, and when I requested my secured directory I was sent to the IdP, where I logged in, and then was sent back to https://example.org/Shibboleth.sso/SAML2/POST, where I got a generic error message. Checking the logs, I am told:

        found encrypted assertions, but no CredentialResolver was available

    I have rechecked the configuration, and there I have:

        <CredentialResolver type="File" key="/etc/shibboleth/sp-key.pem" certificate="/etc/shibboleth/sp-cert.pem"/>

    Both of these files are present at those locations. I've restarted Apache and retried, but still get the same error. I don't know if it makes a difference, but only a subdirectory of the site has been secured; the document root is publicly available.
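
    Two low-risk checks before digging deeper, as a hedged sketch (standard openssl invocations): confirm the key and certificate actually pair up, and note that shibboleth2.xml is read by the shibd daemon, so restarting Apache alone may not pick up the change.

        openssl x509 -noout -modulus -in /etc/shibboleth/sp-cert.pem | openssl md5
        openssl rsa -noout -modulus -in /etc/shibboleth/sp-key.pem | openssl md5
        # the two digests must match; also confirm the shibd user can read sp-key.pem
        sudo service shibd restart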

    Read the article

  • Unknown CSS font-family oddity with IE7-10 on Win Vista-8

    - by Jeff
    I am seeing the following oddity with IE7-10 on Windows Vista-8: when declaring font-family: serif; I see an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, in IE7-10 on Windows Vista-8 (just like Courier on every version of Windows).

    Screenshot:

    I would like to know (1) can anyone else confirm my research, and (2) BONUS: which font is IE displaying?

    Notes: IE6 and IE7 on Windows XP display Times New Roman, as they should. It doesn't matter whether font-family: serif; is declared in an external stylesheet or inline on the element. Quoting the CSS attribute makes no difference. Adding "Unkown Font" to the stack also makes no difference.

    New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.
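
    A hedged workaround while the mystery font stays unidentified: name explicit faces ahead of the generic keyword, so IE's odd serif mapping is never consulted.

        body {
            font-family: "Times New Roman", Georgia, serif; /* explicit faces first, generic fallback last */
        }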

    Read the article

  • OAM11gR2: Enabling SSL in the Data Store

    - by Ekta Malik
    Enabling SSL in the Data Store of OAM 11gR2 comprises the steps below:

    1. Import the certificate/s required for establishing trust with the (backend) store into the keystore (cacerts) on the machine hosting OAM's WebLogic Admin server
    2. Restart the WebLogic Admin server
    3. Specify the <Hostname>:<SSL port> in the "Location" field of the Data Store and select the "Enable SSL" checkbox

    Pre-requisites:
    - Certificate/s to be imported are available for import
    - The Data Store has already been created using the OAM admin console and the connection to the store is successful on the non-SSL port (though one can always create a Data Store with SSL settings on the first go)

    Steps for importing the certificate/s: One can use the keytool utility that comes bundled with the JDK to import the certificate. The step for importing the certificate is the same for self-signed and third-party certificates (like VeriSign):

        $JAVA_HOME/bin/keytool -import -v -noprompt -trustcacerts -alias <aliasname> -file <Path to the certificate file> -keystore $JAVA_HOME/jre/lib/security/cacerts

    Here $JAVA_HOME refers to the path of the JDK install directory. Note: in case multiple certificates are required for establishing the trust, import all those certificates using the same keytool command mentioned above. One can verify the import of the certificate/s with:

        $JAVA_HOME/bin/keytool -list -alias <aliasname> -v -keystore $JAVA_HOME/jre/lib/security/cacerts

    Once the trust is established for the SSL communication, specifying the SSL-specific settings in the Data Store (via the OAM admin console) will no longer produce the error seen before the certificates were imported, and the "Test Connection" will be successful.

    Read the article

  • Permission denied accessing windows firewall

    - by Simon Sabin
    It doesn't matter who I am logged in as; I get the following error in the MMC console when I launch the firewall advanced settings:

        There was an error opening the Windows Firewall with Advanced Security snap-in
        You do not have the correct permissions to open the Windows Firewall with Advanced Security console. You must be a member of the Administrators group or the Network Operators group to perform this task. For more information, contact your system administrator.
        Error code: 0x5.

    I've tried Process Monitor to identify what permission is being denied, but no luck. If I run netsh directly I get access denied as well. This is on Windows Server 2008 SP2, and yes, I was running as an administrator. Any ideas?
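
    One cheap check before chasing ACLs, offered as a hedged sketch: the snap-in can fail with access-denied-style errors when the underlying services are stopped (MpsSvc is the Windows Firewall service on Server 2008; BFE is its Base Filtering Engine dependency).

        sc query bfe
        sc query mpssvc
        net start bfe
        net start mpssvc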

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while.

    ISSUE 1

    The first issue is that 2012, prior to Update 1, published an invalid build list item value to the collection global list. In 2010, the build global list item value syntax is an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is:

        <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException">
          <details id="600019" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" />
        </detail>

    When uploading the global list via the Process Editor, a similar error dialog appears (screenshot omitted). This issue is corrected in Update 1, as the backslash is changed to a forward slash.

    ISSUE 2

    The second issue is that when upgrading from 2010 to 2012, the builds in 2010 are not published to the 2012 global list. After the upgrade the 2012 global list doesn't have any builds, and only builds run in 2012 are published to the global list. This was reported to the MSDN forums and Connect. To correct this I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. This is a console application with a Program.cs, a GlobalLists.cs and an App.config (not published here). The utility connects to TFS 2012 and loops through the collections, or a target collection as specified in the App.config. It then loops through the projects, the build definitions, and the builds, creates a global list for each project that has at least one build, and imports the new list to TFS. Here's the code for the Program and GlobalLists classes.

    Program.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.TeamFoundation.Framework.Client;
        using Microsoft.TeamFoundation.Framework.Common;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Server;
        using System.IO;
        using System.Xml;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using System.Diagnostics;
        using Utilities;
        using System.Configuration;

        namespace TFSProjectUpdater_CLC
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Build a time-stamped log file name.
                    DateTime temp_d = System.DateTime.Now;
                    string logName = temp_d.ToShortDateString();
                    logName = logName.Replace("/", "_");
                    logName = logName + "_" + temp_d.TimeOfDay;
                    logName = logName.Replace(":", ".");
                    logName = "TFSGlobalListBuildsUpdater_" + logName + ".log";
                    Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName)));
                    Trace.AutoFlush = true;
                    Trace.WriteLine("Start:" + DateTime.Now.ToString());
                    Console.WriteLine("Start:" + DateTime.Now.ToString());

                    string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString();
                    GlobalLists gl = new GlobalLists();

                    // Replace this with the URL to your TFS instance.
                    Uri tfsUri = new Uri("https://" + tfsServer + "/tfs");
                    //bool foundLite = false;
                    TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider());
                    config.EnsureAuthenticated();

                    ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>();
                    IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList();

                    // Target collection (empty means: process all collections except the exclusions below).
                    string targetCollection = ConfigurationManager.AppSettings["targetCollection"];
                    foreach (TeamProjectCollection coll in collections)
                    {
                        if (targetCollection.Equals(string.Empty))
                        {
                            if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery"))
                            {
                                doWork(coll, tfsServer);
                            }
                        }
                        else
                        {
                            if (coll.Name.Equals(targetCollection))
                            {
                                doWork(coll, tfsServer);
                            }
                        }
                    }

                    Trace.WriteLine("Finished:" + DateTime.Now.ToString());
                    Console.WriteLine("Finished:" + DateTime.Now.ToString());
                    if (System.Diagnostics.Debugger.IsAttached)
                    {
                        Console.WriteLine("\nHit any key to exit...");
                        Console.ReadKey();
                    }
                    Trace.Close();
                }

                static void doWork(TeamProjectCollection coll, string tfsServer)
                {
                    GlobalLists gl = new GlobalLists();
                    string targetProject = ConfigurationManager.AppSettings["targetProject"];
                    Trace.WriteLine("Collection: " + coll.Name);
                    Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString());
                    TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u);
                    ICommonStructureService icss = c.GetService<ICommonStructureService>();
                    try
                    {
                        Trace.WriteLine("\tChecking Collection Global Lists.");
                        gl.RebuildBuildGlobalLists(c);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Exception! :" + coll.Name);
                    }
                }
            }
        }

    GlobalLists.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;
        using Microsoft.TeamFoundation.Framework.Common;
        using Microsoft.TeamFoundation.Server;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using Microsoft.TeamFoundation.Build.Client;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Diagnostics;

        namespace Utilities
        {
            public class GlobalLists
            {
                string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists""> <GLOBALLIST> </GLOBALLIST> </gl:GLOBALLISTS>";

                public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs)
                {
                    WorkItemStore wis = new WorkItemStore(_tfs);

                    // Export the collection's current global lists file and save it as a backup.
                    XmlDocument globalListsFile = wis.ExportGlobalLists();
                    globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml");
                    LogExportCurrentCollectionGlobalListsAsBackup(_tfs);

                    // Build a new global build list from each build definition within each team project.
                    IBuildServer buildServer = _tfs.GetService<IBuildServer>();
                    foreach (Project p in wis.Projects)
                    {
                        XmlDocument newProjectGlobalList = new XmlDocument();
                        newProjectGlobalList.LoadXml(GL_NewList);
                        LogInstanciateNewProjectBuildGlobalList(_tfs, p);
                        BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p);
                        LogEndOfProject(_tfs, p);
                    }
                }

                // Private methods

                private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p)
                {
                    // Locate the template node.
                    XmlNamespaceManager nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable);
                    nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
                    XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr);
                    LogLocatedGlobalListNode(_tfs, p);

                    // Add the name attribute for the project build global list.
                    XmlElement buildListNode = (XmlElement)node;
                    buildListNode.SetAttribute("name", "Builds - " + p.Name);
                    LogAddedBuildNodeName(_tfs, p);

                    // Add new builds to the team project build global list.
                    bool buildsExist = false;
                    if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist))
                    {
                        // Import the new build global list for each project that has builds;
                        // write out a temp copy of the global list file to be imported.
                        newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml");
                        LogImportReady(_tfs, p);
                        wis.ImportGlobalLists(newProjectGlobalList.InnerXml);
                        LogImportComplete(_tfs, p);
                    }
                }

                private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist)
                {
                    var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name);
                    foreach (var buildDefinition in buildDefinitions)
                    {
                        var builds = buildDefinition.QueryBuilds();
                        foreach (var build in builds)
                        {
                            // Insert the builds into the current build list node in the correct 2012 format.
                            buildsExist = true;
                            XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM");
                            listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", ""));
                            node.AppendChild(listItem);
                        }
                    }
                    if (buildsExist) LogBuildListCreated(_tfs, p); else LogNoBuildsInProject(_tfs, p);
                    return buildsExist;
                }

                // Logging methods

                private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs)
                {
                    Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
                    Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
                }

                private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection.");
                }

                private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tEND OF PROJECT " + p.Name);
                    Trace.WriteLine(" ");
                    Console.WriteLine("\t\tEND OF PROJECT " + p.Name);
                    Console.WriteLine();
                }

                private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed.");
                    Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed.");
                }
            }
        }

    Read the article

  • Can't assign software to non-admin account

    - by labyrinth
    I'm trying to deploy software on our domain using group policy, but I am only able to do so if the user is a member of a group with administrative privileges. We do not want to allow users to install programs generally, but do want to be able to assign/publish. The test program I'm using is originally a .msi file, and it installs fine for users in the administrators group. How can we assign/publish to normal users without opening up the ability to install whatever? Also, from what I've read, I believe I have correct permissions on the folder/share where the .msi files are stored. This is on Win2008R2 with Win7Pro clients.

    Read the article

  • kernel software trap handling

    - by Tony
    I'm reading a book on Windows Internals and there's something I don't understand: "The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt." So does this mean that software interrupts or exceptions will only be handled under these conditions: a. When the kernel is executing a function from said thread related to the software exception(trap) b. when it is already handling a hardware trap Is my understanding of this correct? The next bit: "In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap." I don't quite understand what it means by 'front-end trap handling functions' and 'field the trap'? Can anyone help me?

    Read the article

  • SQLAuthority News – Presenting at Virtual Tech Days TechEd Pre-Con – February 9, 2011

    - by pinaldave
    I will be presenting on the following subject at Virtual Tech Days TechEd Pre-Con on February 9, 2011.

    Auditing Made Easy: Change Tracking and Change Data Capture
    Date and Time: February 9, 2011, 11:45am-12:45pm
    Location: Online

    In this fast-paced, demo-oriented session we will go over a few concepts related to real-life problems at customer sites. We often see developers and DBAs looking for details such as who dropped a table, who last modified an object, and what exactly was modified. SQL Server 2008 has all the answers: it has various new methods for auditing where you can learn not only what was changed but also who changed it. In addition, we can capture many more details by configuring auditing, and we can even prevent changes if proper policy management is configured. If you have ever attended my session on this subject before, this is going to be an absolutely new session and very much demo-oriented. There will be a quiz at the end of the session, and I promise that if you attend, you will get all the answers correct. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Digest authentication not working: endless cycles of asking for user/pass

    - by bcmcfc
    I'm trying to set up my SVN repository for remote access. In doing so I have some settings under Apache's dav_svn.conf file. When navigating to hostname/svn, or using Tortoise to do the same, it prompts for the user name and password as expected. However, when entering the correct user name and password that were set in the password file linked to under AuthUserFile, it just asks for the credentials again. I think I'm probably missing something simple? The server is running Ubuntu Server 9.10. Accessing SVN remotely does currently work if the authentication lines of dav_svn.conf are commented out. These are the contents of the dav_svn.conf file:

        <Location /svn>
          DAV svn
          SVNPath /home/svn/repo
          AuthType Digest
          AuthName "Subversion Repository"
          AuthDigestDomain /svn/
          AuthUserFile /etc/svn_authfile
          Require valid-user
        </Location>
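
    An endless prompt loop with known-good credentials is classically a realm mismatch: with AuthType Digest, the realm baked into the password file must match AuthName exactly, and the file must be created with htdigest rather than htpasswd. A hedged sketch (the user name is illustrative):

        sudo a2enmod auth_digest
        sudo htdigest -c /etc/svn_authfile "Subversion Repository" alice
        sudo /etc/init.d/apache2 restart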

    Read the article

  • Can not change to a static IP in Fedora 19

    - by user196272
    I'm having a bit of a weird situation. I've installed Fedora 19 onto a virtual machine with no GUI. Initially, eth0 does not show up when I run ifconfig. When I run dmesg | grep eth I see the adapter, but it says it changed names to p2p1. Once I run ifconfig p2p1 up, it shows up. Now when I try to edit /etc/sysconfig/network-scripts/ifcfg-p2p1, it does not exist; the only scripts there are lo and enp0s3. If I try to create the ifcfg-p2p1 file with the correct settings, I can not restart the network service. I tried editing the enp0s3 file, but that did not work. I'm fairly new to Linux and not sure what else to put in here, so if you need any more information just let me know and I'll put it in here.
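
    A minimal ifcfg-p2p1 sketch, with placeholder addresses (on Fedora 19 the file is read by NetworkManager unless NM_CONTROLLED=no, so restart the matching service after saving):

        DEVICE=p2p1
        TYPE=Ethernet
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=none
        IPADDR=192.168.1.50
        PREFIX=24
        GATEWAY=192.168.1.1
        DNS1=192.168.1.1

    followed by systemctl restart network.service (or systemctl restart NetworkManager.service if NetworkManager keeps the interface).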

    Read the article

  • How to set up secure cookie on weblogic server

    - by adejuanc
    WebLogic Server allows a user to securely access HTTPS resources in a session that was initiated using HTTP, without loss of session data. To enable this feature, add AuthCookieEnabled="true" to the WebServer element in config.xml:

        <WebServer Name="myserver" AuthCookieEnabled="true"/>

    Setting AuthCookieEnabled to true, which is the default setting, causes the WebLogic Server instance to send a new secure cookie, _WL_AUTHCOOKIE_JSESSIONID, to the browser when authenticating via an HTTPS connection. Once the secure cookie is set, the session is allowed to access other security-constrained HTTPS resources only if the cookie is sent from the browser. Thus, WebLogic Server uses two cookies: the JSESSIONID cookie and the _WL_AUTHCOOKIE_JSESSIONID cookie. By default, the JSESSIONID cookie is never secure, but the _WL_AUTHCOOKIE_JSESSIONID cookie is always secure. A secure cookie is only sent when an encrypted communication channel is in use. Assuming a standard HTTPS login (HTTPS is an encrypted HTTP connection), your browser gets both cookies. For subsequent HTTP access, you are considered authenticated if you have a valid JSESSIONID cookie, but for HTTPS access, you must have both cookies to be considered authenticated. If you only have the JSESSIONID cookie, you must re-authenticate.

    To configure this in the Admin Console:
    1. Log into the WebLogic Admin Console.
    2. Under Domain Structure, click on <domainname>.
    3. Select the "Web Applications" tab.
    4. Select "Lock and Edit" in the Change Center.
    5. Click the "Auth Cookie Enabled" checkbox.
    6. Restart to confirm the changes.
    7. Test an application and view the cookie stored as "JSESSIONID".

    To configure the web application's weblogic-application.xml file:
    1. Run the following to extract the file from the web application's EAR:

        $PATH_JDK_HOME\bin\jar -xvf easy-web-examples.ear META-INF/weblogic-application.xml

    2. Add <cookie-secure>true</cookie-secure> between <session-descriptor> and </session-descriptor> in weblogic-application.xml.
    3. Run the following to repackage the file into the application:

        $PATH_JDK_HOME\bin\jar -uvf easy-web-examples.ear META-INF/weblogic-application.xml

    4. Deploy the application into WebLogic.

    For further information, please read the documentation on "Using Secure Cookies to Prevent Session Stealing": http://download.oracle.com/docs/cd/E12840_01/wls/docs103/security/thin_client.html#wp1053780

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database, it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer, it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction IDs and automated utilities for self-healing replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new replication features, Global Transaction IDs specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up-to-date slave (that is the default, but user-configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering-related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:
    - Subquery optimizations: subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - The Optimizer now uses CURRENT_TIMESTAMP as the default for DATETIME columns. For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for range-based queries: the Optimizer now uses ready statistics vs. index-based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY: the optimization criteria/decision on the execution method is now made at the optimization vs. parsing stage.
    - EXPLAIN output in JSON format for hierarchical readability and Enterprise tool consumption.

    You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides online ADD INDEX, FK DROP, and online column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL") - improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance and Scale
    The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 Labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:
    - 2x throughput improvement for read-only activity
    - 6x throughput improvement for SELECT range
    - Read/write benchmarks are in progress

    More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog. MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access Labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • Practical RAID Performance?

    - by wag2639
    I've always thought the following to be a general rule of thumb for RAID:

    - RAID 0: best performance for READ and WRITE from striping; greatest risk
    - RAID 1: redundant; decent for READ (I believe it can read different parts of a file from different hard drives); not the best for WRITE
    - RAID 0+1 (01): combines the redundancy of RAID 1 with the performance of RAID 0
    - RAID 1+0 (10): slightly better version of RAID 0+1
    - RAID 5: good READ performance, bad WRITE performance; redundant

    IS THIS ASSUMPTION CORRECT? (And how do they compare to a JBOD setup for R/W IO performance?) Are certain practical RAID setups better for different applications: gaming, video editing, databases (Access or SQL)? I was thinking about hard disk drives, but does this apply to solid state drives as well?

    Read the article

  • How to create a Semantic Network like wordnet based on Wikipedia?

    - by Forbidden Overseer
    I am an undergraduate student and I have to create a semantic network based on Wikipedia. This semantic network would be similar to WordNet (except that it is based on Wikipedia and is concerned with "streams of text/topics" rather than simple words, etc.), and I am thinking of using the Wikipedia XML dumps for the purpose. I guess I need to learn to parse XML and "some other things" related to NLP and probably machine learning, but I am in no way sure about anything involved after the XML parsing. Is the starting step, parsing the XML dump into text, a good idea/step? Any alternatives? What would be the steps involved after parsing the XML into text to create a functional semantic network? What are the things/concepts I should learn in order to do them? I am not directly asking for book recommendations, but if you have read a book/article that teaches anything related/helpful, please mention it. This may include a reference to already existing implementations regarding the subject. Please correct me if I was wrong somewhere. Thanks!
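
    On the first step: a streaming parser keeps memory flat no matter how large the dump is. A minimal C# sketch using XmlReader (the file name is a placeholder; element order follows the MediaWiki export schema):

        using System;
        using System.Xml;

        class DumpReader
        {
            static void Main()
            {
                var settings = new XmlReaderSettings { IgnoreWhitespace = true };
                using (XmlReader r = XmlReader.Create("enwiki-latest-pages-articles.xml", settings))
                {
                    while (r.ReadToFollowing("page"))        // one <page> per article
                    {
                        r.ReadToFollowing("title");
                        string title = r.ReadElementContentAsString();
                        r.ReadToFollowing("text");           // article wikitext, nested under <revision>
                        string wikitext = r.ReadElementContentAsString();
                        // hand (title, wikitext) to the semantic-network builder here
                    }
                }
            }
        }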

    Read the article

  • RightNow CX Cloud Service Combined with Oracle Fusion CRM in the Cloud

    - by Richard Lefebvre
    ·        The May 2012 release of Oracle’s RightNow CX Cloud Service, the customer experience suite, is now integrated with Oracle Fusion CRM, helping organizations to achieve sustainable business growth through relevant, cross-channel customer interactions that can increase revenue opportunities and drive organizational efficiencies. Relevant Interactions Build Stronger Customer Relationships ·          Armed with a comprehensive view of all customer interactions across channels, the context and status of these interactions, and an awareness of the customer’s value to the organization, companies can now offer more relevant products and services to customers. ·         Using the combined Oracle RightNow CX Cloud Service and Oracle Fusion CRM solutions, organizations can increase customer retention, drive higher levels of customer advocacy, and increase sales conversion rates with tools designed to: - Provide a complete, cross-channel view of the customer to sales, marketing and service. - Empower sales and service departments to easily collaborate to proactively solve customer issues, using opportunities to provide purchase advice at the right time and with the right solutions. - Allow sales to easily review service history in preparation for sales calls. - Enable agents to understand customer value based upon prior buying habits and existing opportunities. Deeper Insight Enables Targeted, Personalized Opportunities ·          The combination of Oracle RightNow CX Cloud Service and Oracle Fusion CRM allows sales and marketing organizations to simultaneously leverage service interactions from RightNow CX and sales prediction and segmentation capabilities from Fusion Sales. This helps companies to: - Better match products and services to specific customer needs based on customer service history.  - Deliver targeted, personalized interactions intended to help customers derive more value from purchases and to inform future buying decisions. - Identify new opportunities to increase deal size and conversion rates. Supporting Quotes ·         “Every interaction is a relationship opportunity to grow your business. When these interactions are relevant and add value for customers, customers are more likely to trust the relationship and seek purchase advice,” said David Vap, group vice president, Oracle. “This customer trust provides an opportunity to increase customer product adoption and to reduce the cost of customer acquisition, thereby increasing company profitability.” Supporting Resources ·         Oracle Fusion CRM ·         Oracle Fusion Applications ·         Oracle RightNow CX Cloud Service ·         OracleCRM on Facebook ·         OracleCRM on YouTube

    Read the article

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time and trying to figure out the best approach on several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches practical in reality? I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs, and others that load the entire object graph. But when do I use which? If I have DTOs, what's stopping client code from directly calling them and completely bypassing the model? I can have mappers and factories to create my models from DTOs, maybe? But depending on the design of my models, that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
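
    One common resolution, sketched in C# under stated assumptions (all type names are hypothetical): keep one repository per aggregate that always returns the aggregate whole and valid, and serve read-only screens from a separate query service whose DTOs have no save path, so client code cannot use them to bypass the model for writes.

        using System;
        using System.Collections.Generic;

        // Aggregate repository: loads and saves the Order aggregate whole and valid.
        public interface IOrderRepository
        {
            Order GetById(Guid id);            // full object graph, invariants enforced
            void Save(Order order);
        }

        // Read-side query service: display-only projections, no persistence path.
        public interface IOrderQueryService
        {
            OrderSummary GetSummary(Guid id);
            IReadOnlyList<OrderSummary> FindByCustomer(Guid customerId);
        }

        public sealed class OrderSummary       // plain DTO: data only, no behavior, no Save
        {
            public Guid Id { get; set; }
            public string CustomerName { get; set; }
            public decimal Total { get; set; }
        }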

    Read the article

  • securing source code with bitlocker

    - by Daniel Powell
    We need to deploy a web-based application at a client site where it will be within their local intranet. Part of our requirement is to provide some basic security to protect our IP. I realise that nothing is a 100% guaranteed fix, but we are just looking to make it a bit harder for most people. The server will be running Server 2008, and I was considering using BitLocker as a cheap and nasty way to protect it. From what I understand, assuming the motherboard supports it, we can use the transparent BitLocker mode, which means that moving the HDD to another PC will leave it unreadable in that machine, barring some sort of cold boot attack to steal the encryption keys. Is this assumption correct? And in the case that the motherboard or any other component fails in the PC and we need to replace it, do we lose access to our data, or is there a way to decrypt it (obviously accessible only to our company)? EDIT: we do have legal documents that cover this, and we will be locking the PC physically, and the client will not have access to the PC (Windows login) other than via the website we host on it.
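
    On the component-failure question: the standard answer is to add a numerical recovery password alongside the TPM protector when BitLocker is enabled, and escrow it with your company; a board swap then costs a recovery prompt, not the data. A hedged sketch (on Server 2008 the tool ships as the manage-bde.wsf script run via cscript; later versions ship manage-bde.exe):

        manage-bde -on C: -recoverypassword
        manage-bde -protectors -get C:
        manage-bde -protectors -add C: -recoverypassword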

    Read the article

  • Windows 7 recognises the wrong router.

    - by Henry
    I have a cable router connected to my cable ISP. On the LANs are 4 computers, one of which is a dual boot XP/Win7 machine. I was given an ADSL wireless router which I connected to one of the LAN sockets on my cable router. I don't have an ADSL connection. All the machines connect correctly, some wirelessly, when my dual boot machine is in XP or off. However, when I go into Win 7 on that machine it finds the ADSL router and wants to connect through that (there's no ADSL connection) instead of my cable router and modem. I've turned DHCP off on the ADSL modem and even tried bridging its connections but neither of these have any effect. To get 7 connected, I have to either disconnect the ADSL router, or switch it off. Remember, the SAME computer on the same LAN works perfectly with the same router connected in XP! How then, can I get Win 7 to recognise the correct router?
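
    To see which gateway Windows 7 actually picked, and to force a fresh lease after the DHCP change on the ADSL box, a quick sketch:

        ipconfig /all       (compare the Default Gateway and DHCP Server fields against the cable router)
        route print         (the 0.0.0.0 entry shows the gateway in use)
        ipconfig /release
        ipconfig /renew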

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year, we want to weed out the weak link by making a competition to get the best coder from our group of candidates. Focus is on IEEExtreme-like contests. What I've done: I've been trying already for 2 weeks to get a practice or test site, like UVa or codechef. The plan after I find one: Send them (the candidates) a list of direct links to the problems (making them the "contest's problem list) get them to email me their correct answers' code at the time the judge says they have solved it and accept the fastest one into the team. Issues: We had practiced on UVa already (on programming challenges too), so our former teammate (which will be in the candidate group) already has an advantage if we used it. Codechef has all it's answers public, and since it shows the latest ones it will be extremely hard to verify if the answer was copied. And I've found other sites, like SPOJ, but they share at least some problems with codechef, making them inherit the issue of Codechef So, what alternatives do you think there are? Any site that may work? Any place to get all stuff to set up a Mooshak or similar contest (as in the stuff to get the problems, instructions to set up the server itself are easy to google)? Any other idea?

    Read the article

  • Trouble installing Rabbit VCS for nautilus

    - by Ranhiru Cooray
    I am using Ubuntu 11.10 and following the instructions mentioned here to install RabbitVCS. I added the PPA properly, did a sudo apt-get update, and ran:

        sudo apt-get install rabbitvcs-core rabbitvcs-nautilus rabbitvcs-thunar rabbitvcs-gedit rabbitvcs-cli

    There were dependency issues, and I googled a bit and found out that I need to install RabbitVCS only for Nautilus, as that is the default file manager for Ubuntu. So I ran the install commands separately for rabbitvcs-core, rabbitvcs-gedit and rabbitvcs-cli. Now my understanding is that those are installed properly. However, when I run the install command for rabbitvcs-nautilus, I still get a dependency issue:

        ranhiru@ranhiru-HP-HDX16-NoteBook-PC:~$ sudo apt-get install rabbitvcs-nautilus
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         rabbitvcs-nautilus : Depends: nautilus (< 1:3.0~) but 1:3.2.1-0ubuntu2 is to be installed
                              Depends: python-nautilus (< 1.0~) but 1.0-0ubuntu2 is to be installed
        E: Unable to correct problems, you have held broken packages.

    How do I solve this?
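
    The unmet dependency reads as a package built for GNOME 2's Nautilus (< 3.0) being requested on 11.10's Nautilus 3.2. A hedged check for a Nautilus-3 build in the same PPA (the package name is an assumption based on the project's naming):

        apt-cache search rabbitvcs
        sudo apt-get install rabbitvcs-nautilus3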

    Read the article

< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >