Search Results

Search found 52450 results on 2098 pages for 'disk operating system'.


  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error:

    [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]
       System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412
       System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91
       System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92
    [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]
       System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433
       System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982
       System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216

    A search of the error message didn’t turn up anything helpful except that someone mentioned the error message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned but there’s no timeline for an EntityDataSource build that works with EF6. I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web Forms projects if they rely on that handy control. It might also help to cast a UserVoice vote here:  http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda
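    Until an EF6-compatible EntityDataSource ships, one interim option is to drop the EntityDataSource and bind the grid from code-behind against the EF6 context directly. The sketch below is not from the original post; the names (OrdersContext, Orders, RadGrid1) are placeholders for whatever your model and page actually expose.

    using System;
    using System.Linq;

    public partial class OrdersPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Only bind on the initial request; rebind explicitly after edits.
            if (!IsPostBack)
            {
                BindGrid();
            }
        }

        private void BindGrid()
        {
            // OrdersContext is a hypothetical EF6 DbContext; Orders is one of its DbSets.
            using (var context = new OrdersContext())
            {
                RadGrid1.DataSource = context.Orders
                                             .AsNoTracking()   // read-only binding
                                             .OrderBy(o => o.OrderDate)
                                             .ToList();
                RadGrid1.DataBind();
            }
        }
    }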

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar reports are available at the database level, just like those available at the server instance level. Right-click on the database name and click Reports. Under Standard Reports, you will find the following reports: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, Disk Usage by Partition, Backup and Restore Events, All Transactions, All Blocking Transactions, Top Transactions by Age, Top Transactions by Blocked Transactions Count, Top Transactions by Locks Count, Resource Locking Statistics by Objects, Object Execute Statistics, Database Consistency History, Index Usage Statistics, Index Physical Statistics, Schema Change History, and User Statistics. Select the report named Index Physical Statistics. Once clicked, a report containing all the index names, along with other index-related information such as index type and number of partitions, will be visible. One column that caught my interest was Operation Recommended. In some places, it suggested that an index needs to be rebuilt. It is also possible to click and expand the partitions column and see additional details about the index as well. DBAs and developers who just want an idea of how their indexes and their physical statistics look can use this tool. Note: Please note that I will not rebuild my indexes just because this report recommends it. There are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats for your indexes, and the report can be exported right away to Excel or PDF by right-clicking on it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
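    The data behind this report is exposed by the sys.dm_db_index_physical_stats dynamic management view, so the same figures can be pulled programmatically. Below is a minimal sketch, not part of the original post; the connection string and database are assumptions to replace with your own.

    using System;
    using System.Data.SqlClient;

    class IndexPhysicalStats
    {
        static void Main()
        {
            const string connectionString =
                "Server=localhost;Database=AdventureWorks;Integrated Security=true";

            const string query = @"
                SELECT OBJECT_NAME(ips.object_id)        AS TableName,
                       i.name                            AS IndexName,
                       ips.index_type_desc,
                       ips.avg_fragmentation_in_percent,
                       ips.page_count
                FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
                JOIN sys.indexes i
                  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
                ORDER BY ips.avg_fragmentation_in_percent DESC;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Print table, index, fragmentation percentage and page count.
                        Console.WriteLine("{0}.{1}: {2:F1}% fragmented ({3} pages)",
                            reader["TableName"], reader["IndexName"],
                            reader["avg_fragmentation_in_percent"], reader["page_count"]);
                    }
                }
            }
        }
    }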

    Read the article

  • Upgrade Your Existing BI Publisher 11g (11.1.1.3) to 11.1.1.5

    - by Kan Nishida
    It’s already more than a month now since BI Publisher 11.1.1.5 was released at beginning of May. Have you already tried out many of the great new features? If you are already running on the first version of BI Publisher 11g (11.1.1.3) you might wonder how to upgrade the existing BI Publisher to the 11.1.1.5 version. There are two ways to do this, one is ‘Out-Place’ and another is ‘In-Place’. The ‘Out-Place’ would be quite simple. Basically you will need to install the whole BI or just BI Publisher standalone R11.1.1.5 at a different location then you can switch the catalog to the existing one so that all the reports will be there in the new 11.1.1.5 environment. But sometimes things are not that simple, you might have some custom applications or configuration on the original environment and you want to keep all of them with the upgraded environment. For such scenarios, there is the ‘In-Place’ upgrade, which overrides on top of the original environment only the parts relevant for BI and BI Publisher, and that’s what I’m going to talk about today. Here is the basic steps of the ‘In-Place’ upgrade. Upgrade WebLogic Server to 10.3.5 Upgrade BI System to 11.1.1.5 Upgrade Database Schema Re-register BI Components Upgrade FMW (Fusion Middleware) Configuration Upgrade BI Catalog There is a section that talks about this upgrade from 11.1.1.3 to 11.1.1.5 as part of the overall upgrade document. But I hope my blog post summarized it and made it simple for you to cover only what’s necessary. Upgrade Document: http://download.oracle.com/docs/cd/E21764_01/bi.1111/e16452/bi_plan.htm#BABECJJH Before You Start Stop BI System and Backup I can’t emphasize enough, but before you start PLEASE make sure you take a backup of the existing environments first. You want to stop all WebLogic Servers, Node Manager, OPMN, and OPMN-managed system components that are part of your Oracle BI domains. If you’re on Windows you can do this by simply selecting ‘Stop BI Services’ menu. Then backup the whole system. Upgrade WebLogic Server to 10.3.5 Download WebLogic Server 10.3.5 Upgrade Installer With BI 11.1.1.3 installation your WebLogic Server (WLS) is 10.3.3 and you need to upgrade this to 10.3.5 before upgrading the BI part. In order to upgrade you will need this 10.3.5 upgrade version of WLS, which you can download from our support web site (https://support.oracle.com) You can find the detail information about the installation and the patch numbers for the WLS upgrade installer on this document. Just for your short cut, if you are running on Windows or Linux (x86) here is the patch number for your platform. Windows 32 bit: 12395517: Linux: 12395517 Upgrade WebLogic Server 1. After unzip the downloaded file, launch wls1035_upgrade_win32.exe if you’re on Windows. 2. Accept all the default values and keep ‘Next’ till end, and start the upgrade. Once the upgrade process completes you’ll see the following window. Now let’s move to the BI upgrade. Upgrade BI Platform to 11.1.1.5 with Software Only Install Download BI 11.1.1.5 You can download the 11.1.1.5 version from our OTN page for your evaluation or development. For the production use it’s recommended to download from eDelivery. 1. Launch the installer by double click ‘setup.exe’ (for Windows) 2. Select ‘Software Only Install’ option 3. Select your original Oracle Home where you installed BI 11.1.1.3. 4. Click ‘Install’ button to start the installation. And now the software part of the BI has been upgraded to 11.1.1.5. Now let’s move to the database schema upgrade. 
Upgrade Database Schema with Patch Assistant You need to upgrade the BIPLATFORM and MDS Schemas. You can use the Patch Assistant utility to do this, and here is an example assuming you’ve created the schema with ‘DEV’ prefix, otherwise change it with yours accordingly. Upgrade BIPLATFORM schema (if you created this schema with DEV_ prev) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_BIPLATFORM Upgrade MDS schema (if you created this schema with DEV_ prev) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_MDS Re-register BI System components Now you need to re-register your BI system components such as BI Server, BI Presentation Server, etc to the Fusion Middleware system. You can do this by running ‘upgradenonj2eeapp.bat (or .sh)’ command, which can be found at %ORACLE_HOME%/opmn/bin. Before you run, you need to start the WLS Server and make sure your WLS environment is not locked. If it’s locked then you need to release the system from the Fusion Middleware console before you run the following command. Here is the syntax for the ‘upgradenonj2eeapp.bat (or .sh) command.  upgradenonj2eeapp.bat    -oracleInstance Instance_Home_Location    -adminHost WebLogic_Server_Host_Name    -adminPort administration_server_port_number    -adminUsername administration_server_user And here is an example: cd %BI_HOME%\opmn\bin upgradenonj2eeapp.bat -oracleInstance C:\biee11\instances\instance1 -adminHost localhost -adminPort 7001 -adminUsername weblogic Upgrade Fusion Middleware Configuration There are a couple things on the Fusion Middleware need to be upgraded for the BI system to work. Here is a list of the components to upgrade. Upgrade Shared Library (JRF) Upgrade Fusion Middleware Security (OPSS) Upgrade Code Grants Upgrade OWSM Policy Repository Before moving forward, you need to stop the WebLogic Server. Here is an example. cd %MW_HOME%user_projects\domains\bifoundation_domain\binstopWebLogic.cmd And, let’s start with ‘Upgrade Shared Library (JRF)’. Upgrade Shared Library (JRF) You can use updateJRF() WLST command to upgrade the shared libraries in your domain. Before you do this, you need to stop all running instances, Managed Servers, Administration Server, and Node Manager in the domain. Here is an example of the ‘upgradeJRF()’ command: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeJRF('C:/biee11/user_projects/domains/bifoundation_domain') Upgrade Fusion Middleware Security (OPSS) This step is to upgrade the Fusion Middleware security piece. You can use ‘upgradeOpss()’ WLST command. Here is a syntax for the command. upgradeOpss(jpsConfig="existing_jps_config_file", jaznData="system_jazn_data_file") The ‘existing jps-config.xml file can be found under %DOMAIN_HOME%/config/fmwconfig/jps-config.xml and the ‘system_jazn_data_file’ can be found under %MW_HOME%/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml. And here is an example: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeOpss(jpsConfig="c:/biee11/user_projects/domains/bifoundation_domain/config/fmwconfig/jps-config.xml", jaznData="c:/biee11/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml") exit() Upgrade Code Grants for Oracle BI Domain And this is the last step for the Fusion Middleware platform upgrade task. You need to run this python script ‘bi-upgrade.py‘ script to configure the code grants necessary to ensure that SSL works correctly for Oracle BI. 
However, even if you don’t use SSL, you still need to run this script. And if you have multiple BI domains (Enterprise deployment) then you need to run this on each domain. Here is an example: cd %MW_HOME%\oracle_common\common\bin wlst c:\biee11\Oracle_BI1\bin\bi-upgrade.py --bioraclehome c:\biee11\Oracle_BI1 --domainhome c:\biee11\user_projects\domains\bifoundation_domain Upgrade OWSM Policy Repository This is to upgrade OWSM (Oracle Web Service Manager) policy repository, you can use WLST command ‘upgradeWSMPolicyRepository()’. In order to run this command you need to have your WebLogic Server up-and-running. Here is an example. cd %MW_HOME%user_projects\domains\bifoundation_domain\binstopWebLogic.cmd cd %MW_HOME%\oracle_common\common\bin wlst.cmd connect ('weblogic','welcome1','t3://localhost:7001') upgradeWSMPolicyRepository() exit() Upgrade BI Catalogs This step is required only when you have your BI Publisher integrated with BIEE. If your BI Publisher is deployed as a standalone then you don’t need to follow this step. Now finally, you can upgrade the BI catalog. This won’t upgrade your BI Publisher reports themselves, but it just upgrades some attributes information inside the catalog. Before you do this upgrade, make sure the BI system components are not running. You can check the status by the command below. opmnctl status You can do the upgrade by updating a configuration file ‘instanceconfig.xml’, which can be found at %BI_HOME%\instances\instance1\config\coreapplication_obips1, and change the value of ‘UpgradeAndExit’ to be ‘true’. Here is an example: <ps:Catalog xmlns:ps="oracle.bi.presentation.services/config/v1.1"> <ps:UpgradeAndExit>true</ps:UpgradeAndExit> </ps:Catalog> After you made the change and save the file, you need to start the BI Presentation Server. This time you want to start only the BI Presentation Server instead of starting all the servers. You can use ‘opmnctl’ to do so, and here is an example. cd %ORACLE_INSTANCE%\bin opmnctl startproc ias-component=coreapplication_obips1 This would upgrade your BI Catalog to be 11.1.1.5. After the catalog is updated, you can stop the BI Presentation Server so that you can modify the instanceconfig.xml file again to revert the upgradeAndExit value back to ‘false’. Start Explore BI Publisher 11.1.1.5 After all the above steps, you can start all the BI Services, access to the same URL, now you have your BI Publisher and/or BI 11.1.1.5 in your hands. Have fun exploring all the new features of R11.1.1.5!

    Read the article

  • Windows Embedded CE DiskPrep PowerToy

    - by Bruce Eitman
    Kurt Kennett from Microsoft has offered up a handy new tool. This tool is useful for creating a boot disk for x86 Windows CE systems that use the BIOSLoader. It is especially useful for creating disks with formats other than FAT16; FAT16 disks are fairly easy to create without it. I am letting you know about this new tool, but I have to be honest in telling you that I haven’t used it myself because it is for Windows Vista and Windows 7 – which I don’t have available right now. But you can bet that I will be setting up a virtual machine to give it a try. Here is what Kurt has to say about the tool: Download the DiskPrep Tool. DiskPrep can prepare any hard disk that can be attached to your development PC so that it can boot your x86-based Windows CE OS Design. USB disks (including disk "keys"), Compact Flash cards, and SD cards all work if your system BIOS can boot from that type of media. FAT16, FAT32, and exFAT are all supported. DOS is not used - the program prepares the BIOS loader on the disk and only uses that. If you use Windows Vista or Windows 7, you can use VHD files for VirtualPC. This allows for rapid prototyping of a self-booting system. Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • A Gentle .NET touch to Unix Touch

    - by lavanyadeepak
    The Unix world has an elegant utility called 'touch' which updates the timestamp of the file whose path is passed to it as an argument. Unfortunately, we don't have a quick, direct equivalent of this tool in the Windows world. However, just a few lines of code in C# can fill this gap and stamp any file in the file system (subject to ACL restrictions) with the current timestamp.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;

    namespace LavanyaDeepak.Utilities
    {
        class Touch
        {
            static void Main(string[] args)
            {
                if (args.Length < 1)
                {
                    Console.WriteLine("Please specify the path of the file to operate upon.");
                    return;
                }

                if (!File.Exists(args[0]))
                {
                    try
                    {
                        FileAttributes objFileAttributes = File.GetAttributes(args[0]);
                        if ((objFileAttributes & FileAttributes.Directory) == FileAttributes.Directory)
                        {
                            Console.WriteLine("The input was not a regular file.");
                            return;
                        }
                    }
                    catch { }

                    Console.WriteLine("The file does not seem to exist.");
                    return;
                }

                try
                {
                    File.SetLastWriteTime(args[0], DateTime.Now);
                    Console.WriteLine("The touch completed successfully.");
                }
                catch (System.UnauthorizedAccessException exUnauthException)
                {
                    Console.WriteLine("Unable to touch file. Access is denied. The security manager responded: " + exUnauthException.Message);
                }
                catch (IOException exFileAccessException)
                {
                    Console.WriteLine("Unable to touch file. The IO interface failed to complete the request and responded: " + exFileAccessException.Message);
                }
                catch (Exception exGenericException)
                {
                    Console.WriteLine("Unable to touch file. An internal error occurred. The details are: " + exGenericException.Message);
                }
            }
        }
    }
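    A quick way to try it, assuming the class above is saved as Touch.cs: compile it with csc Touch.cs and run Touch.exe C:\temp\report.log (any existing file path), which stamps that file with the current date and time.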

    Read the article

  • OAF Page to Upload Files into Server from local Machine

    - by PRajkumar
    1. Create a New Workspace and Project File > New > General > Workspace Configured for Oracle Applications File Name – PrajkumarFileUploadDemo   Automatically a new OA Project will also be created   Project Name -- FileUploadDemo Default Package -- prajkumar.oracle.apps.fnd.fileuploaddemo   2. Create a New Application Module (AM) Right Click on FileUploadDemo > New > ADF Business Components > Application Module Name -- FileUploadAM Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.server Check Application Module Class: FileUploadAMImpl Generate JavaFile(s)   3. Create a New Page Right click on FileUploadDemo > New > Web Tier > OA Components > Page Name -- FileUploadPG Package -- prajkumar.oracle.apps.fnd.fileuploaddemo.webui   4. Select the FileUploadPG and go to the strcuture pane where a default region has been created   5. Select region1 and set the following properties --     Attribute Property ID PageLayoutRN AM Definition prajkumar.oracle.apps.fnd.fileuploaddemo.server.FileUploadAM Window Title Uploading File into Server from Local Machine Demo Window Title Uploading File into Server from Local Machine Demo     6. Create Stack Layout Region Under Page Layout Region Right click PageLayoutRN > New > Region   Attribute Property ID MainRN AM Definition messageComponentLayout   7. Create a New Item messageFileUpload Bean under MainRN Right click on MainRN > New > messageFileUpload Set Following Properties for New Item --   Attribute Property ID MessageFileUpload Item Style messageFileUpload   8. Create a New Item Submit Button Bean under MainRN Right click on MainRN > New > messageLayout Set Following Properties for messageLayout --   Attribute Property ID ButtonLayout   Right Click on ButtonLayout > New > Item   Attribute Property ID Submit Item Style submitButton Attribute Set /oracle/apps/fnd/attributesets/Buttons/Go   9. 
Create Controller for page FileUploadPG Right Click on PageLayoutRN > Set New Controller Package Name: prajkumar.oracle.apps.fnd.fileuploaddemo.webui Class Name: FileUploadCO   Write Following Code in FileUploadCO processFormRequest   import oracle.cabo.ui.data.DataObject; import java.io.FileOutputStream; import java.io.InputStream; import oracle.jbo.domain.BlobDomain; import java.io.File; import oracle.apps.fnd.framework.OAException; public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) { super.processFormRequest(pageContext, webBean);    if(pageContext.getParameter("Submit")!=null)  {   upLoadFile(pageContext,webBean);      } }   -- Use Following Code if want to Upload Files in Local Machine -- ----------------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "D:\\PRajkumar";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload"); //FileUploading is my MessageFileUpload Bean Id   if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");  // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");  // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("D:\\PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    System.out.println("input---->"+input);    byte abyte0[] = new byte[0x19000];    int i;         while((i = input.read(abyte0)) > 0)    output.write(abyte0, 0, i);    output.close();    input.close();   }  }  catch(Exception ex)  {   throw new OAException(ex.getMessage(), OAException.ERROR);  }     }   -- Use Following Code if want to Upload File into Server -- ------------------------------------------------------------------------- public void upLoadFile(OAPageContext pageContext,OAWebBean webBean) { String filePath = "/u01/app/apnac03r12/PRajkumar/";  System.out.println("Default File Path---->"+filePath);  String fileUrl = null;  try  {   DataObject fileUploadData =  pageContext.getNamedDataObject("MessageFileUpload");  //FileUploading is my MessageFileUpload Bean Id     if(fileUploadData!=null)   {    String uFileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");   // include this line    String contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");   // For Mime Type    System.out.println("User File Name---->"+uFileName);    FileOutputStream output = null;    InputStream input = null;    BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, uFileName);    System.out.println("uploadedByteStream---->"+uploadedByteStream);                               File file = new File("/u01/app/apnac03r12/PRajkumar", uFileName);    System.out.println("File output---->"+file);    output = new FileOutputStream(file);    System.out.println("output----->"+output);    input = uploadedByteStream.getInputStream();    
System.out.println("input---->"+input);    byte abyte0[] = new byte[0x19000];    int i;         while((i = input.read(abyte0)) > 0)    output.write(abyte0, 0, i);    output.close();    input.close();   }  }  catch(Exception ex)  {   throw new OAException(ex.getMessage(), OAException.ERROR);  }     }   10. Congratulation you have successfully finished. Run Your page and Test Your Work           -- Used Code to Upload files into Server   -- Before Upload files into Server     -- After Upload files into Server       -- Used Code to Upload files into Local Machine   -- Before Upload files into Local Machine       -- After Upload files into Local Machine

    Read the article

  • Creando controles personalizados para asp.net

    - by jaullo
    Although asp.net contains many controls that make our lives easier, on many occasions we need additional functionality. One option is to turn to creating custom controls. This will be the first of several posts I will dedicate to showing how to create some custom controls using extremely simple and easy-to-understand elements. For that we will use only the RegularExpressionValidator and a few regular expressions. For this example we will extend the functionality of a textbox so that it validates credit card numbers. Our textbox must verify that there are 16 digits, in groups of 4, separated by a - So, we create a new ASP.NET server control project. First we import the namespaces:

    Imports System.ComponentModel
    Imports System.Web
    Imports System.Web.UI.WebControls
    Imports System.Web.UI

    Second, we create our class:

    Public Class TextboxCreditCardNumber
    End Class

    Now, we tell our class that it is going to inherit from TextBox:

    Public Class TextboxCreditCardNumber
        Inherits TextBox
    End Class

    Once we have this, our programming base is ready, so let's code our new functionality. We declare our variables and a public property that will hold the error message to be returned to the user; it is public so that it can be customized:

    Private req As New RegularExpressionValidator
    Private mstrmensaje As String = "Número de Tarjeta Invalido"

    Public Property MensajeError() As String
        Get
            Return mstrmensaje
        End Get
        Set(ByVal value As String)
            mstrmensaje = value
        End Set
    End Property

    Now we define the OnInit method of our control, in which we assign the properties and initialize our functions:

    Protected Overrides Sub OnInit(ByVal e As System.EventArgs)
        req.ControlToValidate = MyBase.ID
        req.ErrorMessage = mstrmensaje
        req.Display = ValidatorDisplay.Dynamic
        req.ValidationExpression = "^(\d{4}-){3}\d{4}$|^(\d{4} ){3}\d{4}$|^\d{16}$"
        Controls.Add(New LiteralControl("&nbsp;"))
        Controls.Add(req)
        MyBase.OnInit(e)
    End Sub

    And finally, we define the Render event (which is in charge of drawing our control):

    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
        MyBase.Render(writer)
        req.RenderControl(writer)
    End Sub

    The only thing left now is to compile our class and add our new control to the controls ToolBox so that it can be used.

    Read the article

  • What is controlling the desktop display?

    - by Bart Silverstrim
    I have two Ubuntu systems and in the course of changing configurations something has become muddled. I have disabled Unity in favor of gnome shell, the older style display of the desktop. Then I installed xfce 4. Seemed everything would be working okay, and for the most part it does. Except I noticed that on one system there's something else controlling settings. On one, if I right click the desktop, I get the menu with the options: open in new window create launcher... create url link... create folder... create from template -> open terminal here paste desktop settings... properties... applications -> On system two, right clicking brings up the menu: Create new folder Create new document -> organize desktop by name keep aligned paste Change Desktop Background Additionally, even though I set the background with the xfce settings manager, on system two that background will appear for a few seconds before it's replaced by something that looks like a background from Ubuntu's original desktop. And it's being controlled by what comes up with the "change desktop background" when right clicking, which isn't the xfce settings manager. On the first system, that right click does bring up the xfce settings tool. In short, something is controlling/overriding the xfce settings on machine two, but I can't find what file or configuration tool is doing it. How can I get system two to behave as system one, giving control of settings and configuration of X to XFCE's tools?

    Read the article

  • E-Business Integration with SSO using AccessGate

    - by user774220
    Moving away from the legacy Oracle SSO, Oracle E-Business Suite (EBS) came up with EBS AccessGate as the way forward to provide Single Sign On with Oracle Access Manager (OAM). As opposed to AccessGate in OAM terminology, EBS AccessGate has no specific connection with OAM with respect to configuration. Instead, EBS AccessGate uses the header variables sent from the SSO system to create the native user session, like any other SSO-enabled web application. E-Business Suite Integration with Oracle Access Manager: It is a known fact that E-Business Suite requires Oracle Internet Directory (OID) as the user repository to enable Single Sign On. This is due to the fact that E-Business Suite needs to be registered with OID for Single Sign On. Additionally, E-Business Suite uses “orclguid” in OID to map the Single Sign On user with the corresponding local user profile. During authentication, EBS AccessGate expects the SSO system to return orclguid and the EBS username (stored as a user attribute in the SSO user store) in two header variables, USER_ORCLGUID and USER_NAME respectively. The following diagram depicts the authentication flow once the SSO system returns the EBS username and orclguid after successful authentication. Topic to brainstorm: EBS AccessGate as a generic SSO enablement solution for E-Business Suite. Even though EBS AccessGate is suggested as an integration approach between OAM and Oracle E-Business Suite, this section attempts to look at EBS AccessGate as a generic solution approach to provide SSO to Oracle E-Business Suite using any Web SSO solution. From the above points, the only dependency on the SSO system is that it should be able to return the corresponding orclguid from the OID which is configured with the E-Business Suite. This can be achieved by a variety of approaches: By using the same OID referred to by E-Business Suite as the Single Sign On user store. If the SSO system is using a different user store, then: Use DIP or OIM to sync orclguid from the E-Business Suite OID to the SSO user store, or use OVD to provide an LDAP view where orclguid from the E-Business Suite OID is part of the user entity in the user store referred to by the SSO system.
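    To see the header-variable pattern outside the EBS context, here is a rough, illustrative C# sketch — this is not EBS AccessGate's actual code, only the general idea the post describes: the application sits behind an SSO agent, trusts the USER_NAME and USER_ORCLGUID headers that agent injects, and builds its own session from them. It is only safe when the application is reachable exclusively through the SSO layer.

    using System;
    using System.Web;

    public class HeaderSsoModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.AuthenticateRequest += OnAuthenticateRequest;
        }

        private static void OnAuthenticateRequest(object sender, EventArgs e)
        {
            var context = ((HttpApplication)sender).Context;

            // Headers assumed to be set by the SSO agent in front of the application.
            string userName = context.Request.Headers["USER_NAME"];
            string orclGuid = context.Request.Headers["USER_ORCLGUID"];

            if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(orclGuid))
            {
                // No SSO headers: treat the request as unauthenticated.
                context.Response.StatusCode = 401;
                context.Response.End();
                return;
            }

            // Map the SSO identity to the application's own session here,
            // e.g. look up the local user profile by orclGuid.
            context.User = new System.Security.Principal.GenericPrincipal(
                new System.Security.Principal.GenericIdentity(userName), new string[0]);
        }

        public void Dispose() { }
    }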

    Read the article

  • Persistence problem when installing USB Ubuntu variant using Windows

    - by Derek Redfern
    I'm part of a project called One2One2Go - we're developing a Live USB Ubuntu variant for use in schools in Somerville, MA. We have the project files compiled into an iso, and when installed using the native Ubuntu Startup Disk Creator, the USB works fine. When installed using a tool on a Windows machine (LiLi at linuxliveusb.com or liveusb-creator at fedorahosted.org/liveusb-creator), the USB works, but does not have persistence. This happens even if the creator is specifically set to allocate an area for persistent files. When comparing files on sticks created in Windows or Ubuntu, the one file that is different is syslinux/syslinux.cfg. I have printed the contents of the file below:

    Installed on Windows:

    DEFAULT nomodset
    LABEL debug
      menu label ^debug
      kernel /casper/vmlinuz
      append boot=casper xforcevesa initrd=/casper/initrd.gz --
    LABEL nomodset
      menu label ^nomodset
      kernel /casper/vmlinuz
      append boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
    LABEL memtest
      menu label ^Memory test
      kernel /install/memtest
      append -
    LABEL hd
      menu label ^Boot from first hard disk
      localboot 0x80
      append -
    PROMPT 0
    TIMEOUT 1

    Installed on Ubuntu:

    DEFAULT nomodset
    LABEL debug
      menu label ^debug
      kernel /casper/vmlinuz
      append noprompt cdrom-detect/try-usb=true persistent boot=casper xforcevesa initrd=/casper/initrd.gz --
    LABEL nomodset
      menu label ^nomodset
      kernel /casper/vmlinuz
      append noprompt cdrom-detect/try-usb=true persistent boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
    LABEL memtest
      menu label ^Memory test
      kernel /install/memtest
      append -
    LABEL hd
      menu label ^Boot from first hard disk
      localboot 0x80
      append -
    PROMPT 0
    TIMEOUT 1

    For troubleshooting reasons, I changed the Windows-created syslinux.cfg to match the one created by Ubuntu and persistence worked on it. I think the problem is that on the stick created by Windows, there is no "persistent" flag, but I don't know why. Is this a problem with our disk image or with the creator? How would I go about fixing this problem? Thanks in advance for your help. Derek Redfern

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces. (Actually a file sourced into it.) Which start on condition do I need to declare to let my task run before any networking jobs? I've tried start on starting networking, but that's apparently too late. When I log in after booting I can see that the changes were written, but obviously they are not used: the new config states a static IP, but the boot process waits for a non-existing DHCP server (old config) to time out. I've also tried start on starting network-interface INTERFACE=eth0, which didn't work either. IIRC there was an error in the log that the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address to access them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file. I can't modify /etc directly, because that disk is shared (copy-on-write/diff) among the clones and guestfish apparently doesn't support that type of image. I could also let them use DHCP and setup a server that assigns IP by MAC, but I'm afraid of the complexity. I could also add just another virtual disk for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Used software: Ubuntu Server 12.04, VirtualBox. The configuration modifier is a self written ruby script.

    Read the article

  • Tip 13 : Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process.
    2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because Process.GetProcessesByName() is overloaded with a second argument for the machine name. I pass the process name plus the remote machine name, get the Process object back, and call its Kill() method.
    3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN library documentation explicitly states this about the Process class: "Provides access to local and remote processes and enables you to start and stop local system processes."
    4. I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll.
    5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured with permission to kill a process running on the remote machine.
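    For reference, here is a minimal sketch of the second choice (not from the original tip; the machine name, credentials, and process name are placeholders), which terminates the remote process through WMI's Win32_Process class via System.Management:

    using System;
    using System.Management;   // add a reference to System.Management.dll

    class RemoteKill
    {
        static void Main()
        {
            const string machine = "REMOTE-PC";        // placeholder
            const string processName = "notepad.exe";  // placeholder

            var options = new ConnectionOptions
            {
                // Omit Username/Password to connect with the caller's current credentials.
                Impersonation = ImpersonationLevel.Impersonate
            };

            var scope = new ManagementScope(
                string.Format(@"\\{0}\root\cimv2", machine), options);
            scope.Connect();

            var query = new SelectQuery(
                "SELECT * FROM Win32_Process WHERE Name = '" + processName + "'");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject process in searcher.Get())
                {
                    // Win32_Process exposes a Terminate method.
                    process.InvokeMethod("Terminate", null);
                }
            }
        }
    }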

    Read the article

  • How to implement early exit / return in Haskell?

    - by Giorgio
    I am porting a Java application to Haskell. The main method of the Java application follows the pattern:

    public static void main(String[] args) {
        if (args.length == 0) {
            System.out.println("Invalid number of arguments.");
            System.exit(1);
        }
        SomeDataType d = getData(args[0]);
        if (!dataOk(d)) {
            System.out.println("Could not read input data.");
            System.exit(1);
        }
        SomeDataType r = processData(d);
        if (!resultOk(r)) {
            System.out.println("Processing failed.");
            System.exit(1);
        }
        ...
    }

    So I have different steps and after each step I can either exit with an error code, or continue to the following step. My attempt at porting this to Haskell goes as follows:

    main :: IO ()
    main = do
        args <- getArgs
        if (length args) == 0
            then do
                putStrLn "Invalid number of arguments."
                exitWith (ExitFailure 1)
            else do
                -- The rest of the main function goes here.

    With this solution, I will have lots of nested if-then-else (one for each exit point of the original Java code). Is there a more elegant / idiomatic way of implementing this pattern in Haskell? In general, what is a Haskell idiomatic way to implement an early exit / return as used in an imperative language like Java?

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz.  Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year.  The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation. The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers.  As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function.  Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company." Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to innovative new products, add riders, support changing business processes and take advantage of market opportunities.  The ability to leverage the policy administration systems to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity.  You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow:  Accelerate Speed to Market with Adaptive Insurance Policy Administration.Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform.  Migrating from a legacy system requires a well-thought out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities.  We'll discuss more about this approach in a future Oracle Insurance blog.Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Installation Problems & Antivirus

    - by Sagar Walvekar
    We have just migrated our systems from Windows to Ubuntu. We have more than 50 systems previously running Windows XP & Windows 7. After the release of 12.04 we decided to switch to Ubuntu. While we were installing Ubuntu 12.04 on Pentium 4 systems we have faced some issues. We tried it with a USB flash driver & CD. The typical configuration of the system is as follows: Pentium 4 with HT & 512 MB RAM, 40/80 GB IDE / SATA HDD. In windows there were several partitions and data on all the drives in all the systems. We have addressed this by taking backup of the C: drive on other drives & while installing Ubuntu we have just deleted & the C drive & recreated Linux partitions as follows - /root , /Boot, /Home & Swap.(in 10 GB) The problem is that the Linux gets installed without any error. However when I restart the system after the installation the system hangs in while detecting the HDD in BIOS and the system fails to start until I remove the HDD. When I installed the previous Ubuntu version it worked fine, without any hassle. Also if we install Ubuntu 12.04 on same HDD on any other higher capacity system & reconnected to old system it works fine. How can I fix this problem? Also, is there is any Antivirus which will give me real time protection on ubuntu 11 & higher versions with both 32 & 64 Bits?

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here in Programmers SE and other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons: The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcomes, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces in many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine", when it really means "the system is as good as our testing strategy". Often, the testing strategy is not checked. Or, who tests the tests? I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.

    Read the article

  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to get a stream off port 9999 and write to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu' which has sudo privileges. However, when I run the following netcat command, I get this error ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv* brw-rw---- 1 root disk 202, 1 2011-11-30 22:22 /dev/xvda1 brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf bash: /dev/xvdf: Permission denied ubuntu@ip-10-252-35-122:~$ Any idea why I get the permission denied error and how I can work around it? Update: Something mysterious is running in the background that resets the permissions? Check the snippet below and the permission flags seem to automatically reset when I try using /dev/xvdf !?! ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf This is nc from the netcat-openbsd package. An alternative nc is available in the netcat-traditional package. usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port] [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]] [hostname] [port[s]] ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf ubuntu@ip-10-252-35-122:~$ I'm using stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to images at the top)

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, carrying business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts: Firstly, I have concerns about moving from the relative safety of an SSIS job, which protects me from downtime and the speed of the ERP system, to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? Secondly, it is my understanding that data access objects (the components that connect directly with the web services and convert the responses into data types I can then work with in my domain objects) should be singletons (acting as an Adapter/Facade); am I correct? Thirdly, as part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
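    One common shape for this (shown here as an illustrative C# sketch, even though the project in question is ColdFusion — the pattern itself is language-agnostic) is a singleton gateway that wraps the ERP web service, lazily authenticates, caches the session token for the whole application, and re-authenticates once when the ERP reports the session as invalid. ErpClient and SessionExpiredException below are hypothetical placeholders for whatever your ERP's API actually exposes.

    using System;

    public sealed class ErpGateway
    {
        private static readonly Lazy<ErpGateway> instance =
            new Lazy<ErpGateway>(() => new ErpGateway());

        public static ErpGateway Instance { get { return instance.Value; } }

        private readonly object sync = new object();
        private string sessionToken;

        private ErpGateway() { }

        private string GetToken()
        {
            lock (sync)
            {
                if (sessionToken == null)
                {
                    // Hypothetical call: authenticate once and cache the token.
                    sessionToken = ErpClient.Login("serviceUser", "servicePassword");
                }
                return sessionToken;
            }
        }

        public TResult Call<TResult>(Func<string, TResult> request)
        {
            try
            {
                return request(GetToken());
            }
            catch (SessionExpiredException)
            {
                // Token invalidated (e.g. the ERP was restarted): reset and retry once.
                lock (sync) { sessionToken = null; }
                return request(GetToken());
            }
        }
    }

    // Usage: ErpGateway.Instance.Call(token => ErpClient.GetItem(token, "SKU-123"));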

    Read the article

  • SD-CARD reader does not show in ubuntu

    - by shantanu
    I bought Acer asipre 4250. It have built-in SD card reader. But it is not working. Nothing show in /media or fdisk but something in dmesg. dmesg: new high-speed USB device number 3 using ehci_hcd [ 127.396733] scsi5 : usb-storage 2-2:1.0 [ 128.526562] scsi 5:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0 [ 128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0 [ 129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled [ 129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled [ 129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled [ 129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk [ 129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled [ 167.728616] usb 2-2: USB disconnect, device number 3 [ 169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled [ 169.872340] ehci_hcd 0000:00:13.2: PME# enabled fdisk -l: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0006bc6d Device Boot Start End Blocks Id System /dev/sda1 * 2048 48828415 24413184 7 HPFS/NTFS/exFAT /dev/sda2 48828416 50829311 1000448 82 Linux swap / Solaris /dev/sda3 50829312 99657727 24414208 83 Linux /dev/sda4 99659774 625141759 262740993 5 Extended Partition 4 does not start on physical sector boundary. /dev/sda5 99659776 275439615 87889920 7 HPFS/NTFS/exFAT /dev/sda6 275441664 451221503 87889920 7 HPFS/NTFS/exFAT /dev/sda7 451223552 625141759 86959104 7 HPFS/NTFS/exFAT I found another problem just right now. I format last three drives as EXT4 with disk utility. But they are showing as NTFS/exFAT in fdisk. :-(

    Read the article

  • Kingston SD reader not working for USB3

    - by user1146334
    I have a Kingston 4-in-1 Multimedia reader. When my PC was formatted with Win7 it worked fine. I decided to change to Ubuntu 14.04 and now it doesn't work. If I plug it into one of the USB2 ports it works fine, but whenever I plug it into one of the USB3 ports, it thinks about it for a minute and then dies. Here's the output of dmesg when it dies [110262.148656] usb 4-1: new SuperSpeed USB device number 3 using xhci_hcd [110262.170330] usb 4-1: Parent hub missing LPM exit latency info. Power management will be impacted. [110262.266379] usb 4-1: New USB device found, idVendor=11b0, idProduct=6348 [110262.266386] usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [110262.266390] usb 4-1: Product: USB3.0 Media Reader [110262.266394] usb 4-1: Manufacturer: Kingston [110262.266398] usb 4-1: SerialNumber: 08735314400198 [110262.272929] usb-storage 4-1:1.0: USB Mass Storage device detected [110262.273239] scsi15 : usb-storage 4-1:1.0 [110263.290056] scsi 15:0:0:0: Direct-Access FCR-HS3 -0 1.00 PQ: 0 ANSI: 4 [110263.306622] scsi 15:0:0:1: Direct-Access FCR-HS3 -1 1.00 PQ: 0 ANSI: 4 [110263.323292] scsi 15:0:0:2: Direct-Access FCR-HS3 -2 1.00 PQ: 0 ANSI: 4 [110263.339858] scsi 15:0:0:3: Direct-Access FCR-HS3 -3 1.00 PQ: 0 ANSI: 4 [110263.340332] sd 15:0:0:0: Attached scsi generic sg3 type 0 [110263.340706] sd 15:0:0:1: Attached scsi generic sg4 type 0 [110263.340850] sd 15:0:0:2: Attached scsi generic sg5 type 0 [110263.340975] sd 15:0:0:3: Attached scsi generic sg6 type 0 [110264.651847] sd 15:0:0:1: [sde] 31116288 512-byte logical blocks: (15.9 GB/14.8 GiB) [110264.667049] sd 15:0:0:1: [sde] Write Protect is off [110264.667055] sd 15:0:0:1: [sde] Mode Sense: 2f 00 00 00 [110264.682767] sd 15:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [110264.694975] sd 15:0:0:2: [sdf] Attached SCSI removable disk [110264.697933] sd 15:0:0:3: [sdg] Attached SCSI removable disk [110264.729918] sd 15:0:0:0: [sdd] Attached SCSI removable disk [110264.754189] sde: sde1 [110264.760114] sd 15:0:0:1: [sde] Attached SCSI removable disk [110275.377368] usb 4-1: reset SuperSpeed USB device number 3 using xhci_hcd [110275.398453] usb 4-1: Parent hub missing LPM exit latency info. Power management will be impacted. [110275.436592] xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff8802e4fb9980 [110275.436600] xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff8802e4fb99c0 [110277.263444] usb 4-1: USB disconnect, device number 3

    Read the article

  • 12.04 - How do I Fix Grub Error 15 on New Dual Boot Install

    - by Garth
    I just installed 12.04 to dual boot (separate partitions) with an existing Win 7. Upon reboot after the install, things freeze after Grub 1.5 with a Grub Error 15 message. Is there any easy way to fix this? (I am posting this from my second computer.)
    UPDATE: I managed to boot into both 12.04 and Win7 using the BIOS. Selecting the disk with the Win7 'C' partition resulted in the same error message. I rebooted and tried the disk with the Ubuntu partitions: the Grub menu loaded and I managed to boot 12.04. I rebooted, used the BIOS again, and managed to boot Win 7. So, I have access to my computer again (through the BIOS), but this has been a pretty crappy install experience. Garth
    I used the final release 12.04 Ubuntu install disk, reformatted all Linux partitions, and expected a simple clean install. Other than specifying the Ubuntu partitions, I did a basic install of 12.04. There is no way I did anything to cause this error, and I have no idea why my install resulted in a Grub-15 error.
    CLOSED - Answered my own question: I burned a Rescatux disk and used it to recover grub2 (the simplest and easiest way for me). http://www.supergrubdisk.org/category/download/rescatuxdownloads/ Garth

    Read the article

  • Make Your Menu Item Highlighted

    - by Shaun
    When I was working on the TalentOn project (Promotion in MSDN Chinese) I was asked to implement a functionality that makes the top menu items highlighted when the currently viewing page was in that section. This might be a common scenario in the web application development I think.   Simple Example When thinking about the solution of the highlighted menu items the biggest problem would be how to define the sections (menu item) and the pages it belongs to rather than making the menu highlighted. With the ASP.NET MVC framework we can use the controller – action infrastructure for us to achieve it. Each controllers would have a related menu item on the master page normally. The menu item would be highlighted if any of the views under this controller are being shown. Some specific menu items would be highlighted of that action was invoked, for example the home page, the about page, etc. The check rule can be specified on-demand. For example I can define the action LogOn and Register of Account controller should make the Account menu item highlighted while the ChangePassword should make the Profile menu item highlighted. I’m going to use the HtmlHelper to render the highlight-able menu item. The key point is that I need to pass the predication to check whether the current view belongs to this menu item which means this menu item should be highlighted or not. Hence I need a delegate as its parameter. The simplest code would be like this. 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Web; 5: using System.Web.Mvc; 6: using System.Web.Mvc.Html; 7:  8: namespace ShaunXu.Blogs.HighlighMenuItem 9: { 10: public static class HighlightMenuItemHelper 11: { 12: public static MvcHtmlString HighlightMenuItem(this HtmlHelper helper, 13: string text, string controllerName, string actionName, object routeData, object htmlAttributes, 14: string highlightText, object highlightHtmlAttributes, 15: Func<HtmlHelper, bool> highlightPredicate) 16: { 17: var shouldHighlight = highlightPredicate.Invoke(helper); 18: if (shouldHighlight) 19: { 20: return helper.ActionLink(string.IsNullOrWhiteSpace(highlightText) ? text : highlightText, 21: actionName, controllerName, routeData, highlightHtmlAttributes == null ? htmlAttributes : highlightHtmlAttributes); 22: } 23: else 24: { 25: return helper.ActionLink(text, actionName, controllerName, routeData, htmlAttributes); 26: } 27: } 28: } 29: } There are 3 groups of the parameters: the first group would be the same as the in-build ActionLink method parameters. It has the link text, controller name and action name, etc passed in so that I can render a valid linkage for the menu item. The second group would be more focus on the highlight link text and Html attributes. I will use them to render the highlight menu item. The third group, which contains one parameter, would be a predicate that tells me whether this menu item should be highlighted or not based on the user’s definition. And then I changed my master page of the sample MVC application. I let the Home and About menu highlighted only when the Index and About action are invoked. And I added a new menu named Account which should be highlighted for all actions/views under its Account controller. So my master would be like this. 
<div id="menucontainer">

  <ul id="menu">

    <li><%: Html.HighlightMenuItem(
          "Home", "Home", "Index", null, null,
          "[Home]", null,
          helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                 && helper.ViewContext.RouteData.Values["action"].ToString() == "Index") %></li>

    <li><%: Html.HighlightMenuItem(
          "About", "Home", "About", null, null,
          "[About]", null,
          helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                 && helper.ViewContext.RouteData.Values["action"].ToString() == "About") %></li>

    <li><%: Html.HighlightMenuItem(
          "Account", "Account", "LogOn", null, null,
          "[Account]", null,
          helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Account") %></li>

  </ul>

</div>

Note: You need to add the import section for the namespace “ShaunXu.Blogs.HighlighMenuItem” to make the extension method I created above available. So let’s see the result. When the home page was shown, the Home menu was highlighted, since at this moment the controller was Home and the action was Index. If I clicked the About menu you can see it turned highlighted, as now the action was About. And if I navigated to the register page, the Account menu was highlighted, since it should be like that when any action under the Account controller is invoked.

Fluently Language

Till now it’s a full example for the highlight menu item, but it's not perfect yet. Since the most common scenarios would be: highlighted when the action is invoked, or highlighted when any action under the controller is invoked, we can create 2 shortcut methods for them so that normally the developer has no need to specify the delegate. Another place we can improve would be to make the method more user-friendly, or I should say developer-friendly. As you can see, when we want to add a highlight menu item we need to specify 8 parameters and we need to remember what they mean. In fact we can make the method more "fluent" so that the developer can get hints from Visual Studio IntelliSense when using it. Below is the full code for it. 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Html;

namespace Ethos.Xrm.HR
{
    #region Helper

    public static class HighlightActionMenuHelper
    {
        public static IHighlightActionMenuProviderAfterCreated HighlightActionMenu(this HtmlHelper helper)
        {
            return new HighlightActionMenuProvider(helper);
        }
    }

    #endregion

    #region Interfaces

    public interface IHighlightActionMenuProviderAfterCreated
    {
        IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName);
    }

    public interface IHighlightActionMenuProviderAfterOn
    {
        IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes);
    }

    public interface IHighlightActionMenuProviderAfterWith
    {
        IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate);
        IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch();
        IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch();
    }

    public interface IHighlightActionMenuProviderAfterHighlightWhen
    {
        IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText);
        IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes);
        IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText);
        IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass);
    }

    public interface IHighlightActionMenuProviderAfterApplyHighlightStyle
    {
        MvcHtmlString ToActionLink();
    }

    #endregion

    public class HighlightActionMenuProvider :
        IHighlightActionMenuProviderAfterCreated,
        IHighlightActionMenuProviderAfterOn, IHighlightActionMenuProviderAfterWith,
        IHighlightActionMenuProviderAfterHighlightWhen, IHighlightActionMenuProviderAfterApplyHighlightStyle
    {
        private HtmlHelper _helper;

        private string _controllerName;
        private string _actionName;
        private string _text;
        private object _routeData;
        private object _htmlAttributes;

        private Func<HtmlHelper, bool> _highlightPredicate;

        private string _highlightText;
        private object _highlightHtmlAttributes;

        public HighlightActionMenuProvider(HtmlHelper helper)
        {
            _helper = helper;
        }

        public IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName)
        {
            _actionName = actionName;
            _controllerName = controllerName;
            return this;
        }

        public IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes)
        {
            _text = text;
            _routeData = routeData;
            _htmlAttributes = htmlAttributes;
            return this;
        }

        public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate)
        {
            _highlightPredicate = predicate;
            return this;
        }

        public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch()
        {
            return HighlightWhen((helper) =>
            {
                return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower();
            });
        }

        public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch()
        {
            return HighlightWhen((helper) =>
            {
                return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower() &&
                    helper.ViewContext.RouteData.Values["action"].ToString().ToLower() == _actionName.ToLower();
            });
        }

        public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText)
        {
            _highlightText = highlightText;
            _highlightHtmlAttributes = highlightHtmlAttributes;
            return this;
        }

        public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes)
        {
            return ApplyHighlighStyle(highlightHtmlAttributes, _text);
        }

        public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText)
        {
            return ApplyHighlighStyle(new { @class = cssClass }, highlightText);
        }

        public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass)
        {
            return ApplyHighlighStyle(new { @class = cssClass }, _text);
        }

        public MvcHtmlString ToActionLink()
        {
            if (_highlightPredicate.Invoke(_helper))
            {
                // should be highlighted
                return _helper.ActionLink(_highlightText, _actionName, _controllerName, _routeData, _highlightHtmlAttributes);
            }
            else
            {
                // should not be highlighted
                return _helper.ActionLink(_text, _actionName, _controllerName, _routeData, _htmlAttributes);
            }
        }
    }
}

So in the master page, when I need a highlighted menu item I can "tell" the helper how it should behave, just like this:

<li>
    <%: Html.HighlightActionMenu()
            .On("Index", "Home")
            .With(SiteMasterStrings.Home, null, null)
            .HighlightWhenControllerMatch()
            .ApplyHighlighStyle(new { style = "background:url(../../Content/Images/topmenu_bg.gif) repeat-x;text-decoration:none;color:#feffff;" })
            .ToActionLink() %>
</li>

While I'm typing this code, IntelliSense advises me that I need a highlighted action menu, on the Index action of the Home controller, with "Home" as its link text and no additional route data or HTML attributes; it should be highlighted when the controller is "Home"; if it is highlighted, this style should be applied; and finally it is rendered as an action link. This is what we call a "fluent language" (fluent interface). If you have used Moq you will recognize that this style is very developer-friendly, self-documenting and easy to read.

Summary

In this post I demonstrated how to implement a highlighted menu item in ASP.NET MVC by using its controller-action infrastructure. We can see that ASP.NET MVC helps us organize our web application better. I also said a little more about the fluent interface style and showed how it can make our code better and easier to use.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
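The two shortcut methods mentioned earlier (highlight when the controller and action match, and highlight when any action of the controller matches) are not listed separately above. Below is a minimal sketch of how they might wrap the HighlightMenuItem helper; the names HighlightMenuItemForAction and HighlightMenuItemForController are illustrative assumptions, not part of the original code:

using System;
using System.Web.Mvc;

namespace ShaunXu.Blogs.HighlighMenuItem
{
    // Sketch only: shortcut overloads for the two most common highlight rules,
    // so callers do not need to pass the predicate themselves.
    public static class HighlightMenuItemShortcuts
    {
        // Highlight only when both the controller and the action of the current view match.
        public static MvcHtmlString HighlightMenuItemForAction(this HtmlHelper helper,
            string text, string controllerName, string actionName,
            string highlightText, object highlightHtmlAttributes)
        {
            return helper.HighlightMenuItem(text, controllerName, actionName, null, null,
                highlightText, highlightHtmlAttributes,
                h => string.Equals(h.ViewContext.RouteData.Values["controller"].ToString(), controllerName, StringComparison.OrdinalIgnoreCase)
                  && string.Equals(h.ViewContext.RouteData.Values["action"].ToString(), actionName, StringComparison.OrdinalIgnoreCase));
        }

        // Highlight when any action of the given controller is invoked.
        public static MvcHtmlString HighlightMenuItemForController(this HtmlHelper helper,
            string text, string controllerName, string actionName,
            string highlightText, object highlightHtmlAttributes)
        {
            return helper.HighlightMenuItem(text, controllerName, actionName, null, null,
                highlightText, highlightHtmlAttributes,
                h => string.Equals(h.ViewContext.RouteData.Values["controller"].ToString(), controllerName, StringComparison.OrdinalIgnoreCase));
        }
    }
}

With these in place, the Account menu item from the master page above could be written as Html.HighlightMenuItemForController("Account", "Account", "LogOn", "[Account]", null).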

    Read the article

  • Magento 1.6.2 Catalog Price Rule Problem

    - by robgt
    My Magento system seems to have a slight issue with Catalog Price Rule application. As far as the customer is concerned, everything works perfectly. The problem is that some orders are not displayed properly in the admin system when I look at the details: the Catalog Price Rule appears not to be applied, so when we reconcile our card processor details with those in our backend Sage system, the numbers do not tally. Magento and our Sage system say the customer paid X, but the card issuer has taken payment of Y. The payment amount actually taken is correct because of the Catalog Price Rule; the customer always pays the correct amount. But because of some issue with Magento, I think the data is being stored incorrectly (stored without the Catalog Price Rule discount applied). This means that when I look at an order in the admin system, the line item prices that should be affected by the Catalog Price Rule are not, and the prices in our backend Sage system are incorrect too. We use another piece of software to bring the data from Magento into Sage, so the data must be stored incorrectly somewhere in Magento's database, as this software reads the order information out of Magento. Does anyone have any idea what is wrong here, and how it might be fixed? Cheers!

    Read the article

  • Cannot boot computer, help please :(

    - by Austin
    OK, so here are more details. I REALLY NEED HELP! I put Ubuntu on a disc (unsure of the version; I can't check, the computer is broken) and burned it. I restarted the computer, pressed F12, and the installer came up. I pressed Enter for English, plugged in an Ethernet cable for a better connection, and then started the installation. I did the basics and entered a host name, a user name, a password and all that. When it came to the partitioning page I pressed Enter for "no" and it went back to the installation page; I pressed install again and it brought me back, so this time I selected "yes" and it loaded. It then asked what software to install; I selected "sh" (it was the first option and I didn't know which to choose). After everything finished loading, my disc was ejected and the process finished. I started the computer and it opened at a login prompt on a command-line screen. I didn't know the login, so I ran the disc again, making a user and password I would remember. Going back to the login prompt I logged in, and all it did was show my username; I didn't know what to do from there. I looked online and found nothing but GRUB stuff, so I went to "Advanced Ubuntu options" and then "(Recovery Mode)" and chose to update GRUB. In the middle of loading it asked whether to continue (Y/n); I typed Y and it finished, then it went back to the recovery mode options and I closed it, and it loaded... and then showed another command-line screen with a paragraph on top, and on the left side (where "login:" etc. would be) it says "grub" and I don't know what to type. If I turn the computer off and back on it just goes straight to that again... PLEASE HELP :(

    Read the article

  • Stuck With grub rescue> console

    - by rej santos
    Here is the background of my issue: I recently installed the latest version of Ubuntu alongside Windows 8 Enterprise. However, upon checking the disk size it seemed that some of the hard disk space was gone, so I checked the disk partitions and saw that it was being used by another file system. Thinking that Ubuntu keeps its boot files on my C: drive, I deleted that partition and formatted it so I could use it to store movies, music, etc. Now, when I switch on my machine, I am stuck with:

    error: unknown filesystem
    grub rescue>

    I googled a lot and saw commands that are new to me, like sudo, chainloader, etc., but all of these commands only return "unknown command" in the console. All I want is to boot into my Windows 8 OS. Just to add: I can't open the BIOS menu to choose which media to boot from; as soon as I turn on the machine it takes me straight to the grub rescue console. Here are the things I already have:

    Ubuntu installation disk
    Windows 8 system repair disk

    I just don't know how to boot into these things. Let me know what to do.

    Read the article
