Search Results

Search found 43642 results on 1746 pages for 'oracle public sector'.


  • Should I use a collection here?

    - by Eva
    So I have code set up like this: public interface IInterface { public void setField(Object field); } public abstract class AbstractClass extends JPanel implements IInterface { private Object field_; public void setField(Object field) { field_ = field; } } public class ClassA extends AbstractClass { public ClassA() { // unique ClassA constructor stuff } public Dimension getPreferredSize() { return new Dimension(1, 1); } } public class ClassB extends AbstractClass { public ClassB() { // unique ClassB constructor stuff } public Dimension getPreferredSize() { return new Dimension(42, 42); } } public class ConsumerA { public ConsumerA(Collection<AbstractClass> collection) { for (AbstractClass abstractClass : collection) { abstractClass.setField(this); abstractClass.repaint(); } } } All hunky-dory so far, until public class ConsumerB { // Option 1 public ConsumerB(ClassA a, ClassB b) { methodThatOnlyTakesA(a); methodThatOnlyTakesB(b); } // Option 2 public ConsumerB(Collection<AbstractClass> collection) { for (IInterface i : collection) { if (i instanceof ClassA) { methodThatOnlyTakesA((ClassA) i); } else if (i instanceof ClassB) { methodThatOnlyTakesB((ClassB) i); } } } } public class UsingOption1 { public static void main(String[] args) { ClassA a = new ClassA(); ClassB b = new ClassB(); Collection<AbstractClass> collection = Arrays.asList(a, b); ConsumerA consumerA = new ConsumerA(collection); ConsumerB consumerB = new ConsumerB(a, b); } } public class UsingOption2 { public static void main(String[] args) { Collection<AbstractClass> collection = Arrays.asList(new ClassA(), new ClassB()); ConsumerA consumerA = new ConsumerA(collection); ConsumerB consumerB = new ConsumerB(collection); } } With a lot more classes extending AbstractClass, both options get unwieldy. Option1 would make the constructor of ConsumerB really long. Also UsingOption1 would get long too. Option2 would have way more if statements than I feel comfortable with. Is there a viable Option3? If it helps, ClassA and ClassB have all the same methods, they're just implemented differently. Thanks for slogging through my code!
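
    One possible "Option 3" is double dispatch (the visitor idea): each subclass routes itself to the consumer method that can handle it, so ConsumerB needs neither a long parameter list nor a chain of instanceof checks. A minimal sketch follows; it is written in C# to match most of the examples on this page, but the same shape translates directly to the Java classes above, and everything apart from the ClassA/ClassB/ConsumerB names is hypothetical.

    // Hypothetical sketch of double dispatch ("Option 3"): each subclass knows
    // which ConsumerB method applies to it, so ConsumerB just iterates.
    using System.Collections.Generic;

    public abstract class AbstractClass
    {
        // Each concrete subclass dispatches itself to the right consumer method.
        public abstract void Accept(ConsumerB consumer);
    }

    public class ClassA : AbstractClass
    {
        public override void Accept(ConsumerB consumer) { consumer.MethodThatOnlyTakesA(this); }
    }

    public class ClassB : AbstractClass
    {
        public override void Accept(ConsumerB consumer) { consumer.MethodThatOnlyTakesB(this); }
    }

    public class ConsumerB
    {
        public ConsumerB(IEnumerable<AbstractClass> items)
        {
            foreach (var item in items)
                item.Accept(this);   // no type checks, no per-class constructor parameters
        }

        public void MethodThatOnlyTakesA(ClassA a) { /* ... */ }
        public void MethodThatOnlyTakesB(ClassB b) { /* ... */ }
    }

    Adding a new subclass then only means adding one Accept override and one consumer method, instead of growing a constructor signature or an if/else chain.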

    Read the article

  • Calling an Oracle PL/SQL procedure with Custom Object return types from ojdbc6 JDBC thin drivers

    - by Andrew Harmel-Law
    I'm writing some JDBC code which calls an Oracle 11g PL/SQL procedure which has a Custom Object return type. Whenever I try to register my return types, I get either ORA-03115 or PLS-00306 as an error when the statement is executed, depending on the type I set. An example is below: PLSQL Code: Procedure GetDataSummary (p_my_key IN KEYS.MY_KEY%TYPE, p_recordset OUT data_summary_tab, p_status OUT VARCHAR2); Java Code: String query = "begin manageroleviewdata.getdatasummary(?, ?, ?); end;"; CallableStatement stmt = conn.prepareCall(query); stmt.setInt(1, 83); stmt.registerOutParameter(2, OracleTypes.CURSOR); // Causes error: PLS-00306 stmt.registerOutParameter(3, OracleTypes.VARCHAR); stmt.execute(); // Error mentioned above thrown here. Can anyone provide me with an example showing how I can do this? I guess it's possible. However I can't see a rowset OracleType. CURSOR, REF, DATALINK, and more fail. Apologies if this is a dumb question. I'm not a PL/SQL expert and may have used the wrong terminology in some areas of my question. (If so, please edit me.) Thanks in advance. Regards, Andrew
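
    For what it's worth, the usual workaround when an OUT parameter is a PL/SQL-only table type is to wrap (or change) the procedure so it returns a SYS_REFCURSOR, which drivers can bind; with such a wrapper the existing registerOutParameter(2, OracleTypes.CURSOR) call should succeed and the result can be read back as a ResultSet. Below is a hedged C# / ODP.NET sketch of the same idea; the wrapper name getdatasummary_rc and the connection string are hypothetical.

    // Hypothetical sketch: assumes GetDataSummary has been wrapped so that
    // p_recordset is a SYS_REFCURSOR instead of a PL/SQL table type.
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;   // ODP.NET

    class DataSummaryExample
    {
        static void Main()
        {
            using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
            using (var cmd = new OracleCommand("manageroleviewdata.getdatasummary_rc", conn)) // hypothetical wrapper
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("p_my_key", OracleDbType.Int32).Value = 83;
                cmd.Parameters.Add("p_recordset", OracleDbType.RefCursor, ParameterDirection.Output);
                cmd.Parameters.Add("p_status", OracleDbType.Varchar2, 200).Direction = ParameterDirection.Output;

                conn.Open();
                using (var reader = cmd.ExecuteReader())   // reads the ref cursor output
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetValue(0));
                }
                Console.WriteLine("status: " + cmd.Parameters["p_status"].Value);
            }
        }
    }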

    Read the article

  • NHibernate Oracle stored procedure problem

    - by Mr. Flint
    ------Using VS2008, ASP.Net with C#, Oracle, NHibernate---- I have tested my stored procedure. It's working but not with NHibernate. Here are the codes: Procedure : create or replace procedure ThanaDelete (id number) as begin delete from thana_tbl where thana_code = id; end Mapping File: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer"> <class name="DataTransfer.Models.Thana, DataTransfer" table="THANA_TBL"> <id name="THANA_CODE" column="THANA_CODE" type="Int32" unsaved-value="0"> <generator class="native"> <param name="sequence"> SEQ_TEST </param> </generator> </id> <property name="THANA_NAME" column="THANA_NAME" type="string" not-null="false"/> <property name="DISTRICT_CODE" column="DISTRICT_CODE" type="Int32" not-null="false"/> <property name="USER_ID" column="USER_ID" type="string" not-null="false"/> <property name="TRANSACTION_DATE" column="TRANSACTION_DATE" type="Date" not-null="false"/> <property name="TRANSACTION_TIME" column="TRANSACTION_TIME" type="string" not-null="false"/> <sql-delete>exec THANADELETE ? </sql-delete> </class> </hibernate-mapping> error: Message: could not delete: [DataTransfer.Models.Thana#10][SQL: exec THANADELETE ?] Source: NHibernate Inner Exception System.Data.OracleClient.OracleException Message: ORA-00900: invalid SQL statement
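
    The ORA-00900 is most likely because exec THANADELETE ? is SQL Server syntax; Oracle expects either an anonymous PL/SQL block or call syntax. As a hedged C# sketch (not taken from the question), one way to verify the procedure call outside the mapping is to issue the anonymous block directly through the NHibernate session; the session variable and id value are assumed.

    // Hedged sketch: calls the Oracle procedure through an anonymous PL/SQL block
    // using an already-open NHibernate session.
    using NHibernate;

    public static class ThanaRepository
    {
        public static void DeleteThana(ISession session, int thanaCode)
        {
            session.CreateSQLQuery("begin ThanaDelete(:id); end;")
                   .SetInt32("id", thanaCode)
                   .ExecuteUpdate();
        }
    }

    If keeping the <sql-delete> mapping is the goal, the equivalent change there is Oracle-compatible call syntax (for example { call ThanaDelete(?) } instead of exec THANADELETE ?), and depending on the NHibernate version the custom SQL may also need to be flagged as callable.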

    Read the article

  • Why DataColumn AllowDbNull is true even if the Oracle DB does not allow null

    - by matti
    Hi. I have column SomeId in table SomeLink. When I look with tOra or Sql Plus Worksheet, both state: tOra: Column name Data type Default Null Comment SOMEID INTEGER {null} NOT NULL {null} Sql Plus: SOMEID NOT NULL NUMBER(38) I have authored a method that's intended to give default values to all NOT NULL fields that don't have values: public static void GetDefaultValuesForNonNullColumns(DataRow row) { foreach(DataColumn col in row.Table.Columns) { if (Convert.IsDBNull(row[col]) && !col.AllowDBNull) { if (ColumnIsNumeric(col.DataType)) row[col] = 0; else if (col.DataType == typeof(DateTime)) row[col] = DateTime.Now; else if (col.DataType == typeof(String)) row[col] = string.Empty; else if (col.DataType == typeof(Char)) row[col] = ' '; else throw new Exception(string.Format("Unsupported column type: {0}", col.DataType)); } } } When SOMEID is handled in the loop, AllowDBNull is true, which I really can't understand. The table is created in the DataSet like this: _someLinkAdptr = _dbFactory.CreateDataAdapter(); _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand(); _someLinkAdptr.SelectCommand.Connection = _cnctn; _someLinkAdptr.SelectCommand.CommandText = GetSomeLinkSelectTxtAndParams(_someLinkAdptr.SelectCommand, UndefinedValue.ToString(), UndefinedValue.ToString()); The select command returns no rows. The idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the dataset's table like this: private static void CreateDocLink(int anId, int anotherId) { DataRow row = _someDataSet.Tables["SomeLink"].NewRow(); row["AnId"] = anId; row["AnotherId"] = anotherId; Utility.GetDefaultValuesForNonNullColumns(row); _someDataSet.Tables["SomeLink"].Rows.Add(row); } When the DataAdapter is updated to the Oracle DB I get: ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID) Cheers & BR -Matti
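
    A likely explanation is that a plain Fill/Select only brings back minimal column metadata, so constraints such as NOT NULL never reach the DataColumn. A small hedged sketch of pulling the full schema first is below; the adapter and table names follow the question, everything else is assumed.

    // Hedged sketch: ask the adapter for key/constraint schema so that
    // AllowDBNull, MaxLength etc. reflect the Oracle table definition.
    using System.Data;
    using System.Data.Common;

    public static class SchemaHelper
    {
        public static DataTable LoadWithSchema(DbDataAdapter adapter, string tableName)
        {
            var table = new DataTable(tableName);

            // Either line is usually enough; FillSchema is the explicit one.
            adapter.FillSchema(table, SchemaType.Source);                  // populates AllowDBNull, keys, ...
            adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;  // schema for later Fill calls

            adapter.Fill(table);
            return table;
        }
    }

    Whether every constraint comes across depends on the provider, so it is worth checking col.AllowDBNull in the debugger after FillSchema before relying on it in GetDefaultValuesForNonNullColumns.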

    Read the article

  • Getting a weird issue with TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the TO_NUMBER function in the where clause on a varchar2 column if the number of records exceeds a certain number n. I used n as there is no exact number of records on which it happens. On one DB it happens when n was 1 million, on another when it was 0.1 million. E.g. I have a table with 10 million records, say table Country, which has field1 (a varchar2 column containing numeric data) and Id. If I do a query such as select * from country where to_number(field1) = 23 and id > 1 and id < 100000 this works. But if I do the query select * from country where to_number(field1) = 23 and id > 1 and id < 100001 it fails saying invalid number. Next I try the query select * from country where to_number(field1) = 23 and id > 2 and id < 100001 and it works again. As I only got "invalid number" it was confusing, but in the log file it said Memory Notification: Library Cache Object loaded into SGA Heap size 3823K exceeds notification threshold (2048K) KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN' and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')), ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id), t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID from sqlplan s,sys.dba_objects d It seems it's related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions for large data?
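
    The intermittent "invalid number" usually means the optimizer changed the filter order once the id range grew, so TO_NUMBER started seeing rows where field1 is not numeric; the SGA heap notification in the log is most likely an unrelated, informational message. Below is a hedged sketch of a defensive query, issued here from C#/ODP.NET with a hypothetical connection string and bind values, that only converts values that look numeric.

    // Hedged sketch: guard TO_NUMBER with CASE + REGEXP_LIKE so non-numeric
    // rows yield NULL instead of ORA-01722, whatever order predicates run in.
    using System;
    using Oracle.DataAccess.Client;

    class GuardedToNumber
    {
        static void Main()
        {
            const string sql =
                "select * from country " +
                "where id > :minId and id < :maxId " +
                "  and to_number(case when regexp_like(field1, '^-?\\d+$') then field1 end) = :val";

            using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
            using (var cmd = new OracleCommand(sql, conn))
            {
                cmd.BindByName = true;
                cmd.Parameters.Add("minId", OracleDbType.Int32).Value = 1;
                cmd.Parameters.Add("maxId", OracleDbType.Int32).Value = 100001;
                cmd.Parameters.Add("val", OracleDbType.Int32).Value = 23;

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine(reader.GetValue(0));
            }
        }
    }

    If some rows are genuinely allowed to contain non-numeric text, the cleaner long-term fix is to store the numeric data in a NUMBER column instead.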

    Read the article

  • Error while converting function from Oracle to SQL Server

    - by sss
    Hi, I am migrating a function from Oracle to SQL Server 2008. This function raises SELECT statements included within a function cannot return data to a client as error. How can I solve this problem? Original PLSQL Code CREATE OR REPLACE function f_birim_cevrim_katsayi (p_ID_MAMUL in number, p_ID_BIRIMDEN in number, p_ID_BIRIME in number) return number is v_katsayi number; begin v_katsayi:=0; if p_ID_BIRIMDEN!=p_ID_BIRIME then for c in ( select * from CR_BIRIM_CEVRIM where ID_MAMUL = p_ID_MAMUL and ( (ID_BIRIM = p_ID_BIRIMDEN and ID_BIRIM2 = p_ID_BIRIME) OR ( ID_BIRIM2 = p_ID_BIRIMDEN and ID_BIRIM = p_ID_BIRIME) ) and VALID = 1) loop if c.ID_BIRIM=p_ID_BIRIMDEN then v_katsayi:=c.MT_ORAN; else v_katsayi:=1/c.MT_ORAN; end if; end loop; else v_katsayi:=1; end if; return round(v_katsayi,10); exception when others then return 0; end; T-SQL code: If Exists ( SELECT name FROM sysobjects WHERE name = 'f_birim_cevrim_katsayi' AND type = 'FN') DROP FUNCTION f_birim_cevrim_katsayi GO CREATE FUNCTION f_birim_cevrim_katsayi ( @p_ID_MAMUL FLOAT , @p_ID_BIRIMDEN FLOAT , @p_ID_BIRIME FLOAT ) RETURNS float AS BEGIN DECLARE @adv_error INT DECLARE @v_katsayi FLOAT SELECT @v_katsayi = 0 IF @p_ID_BIRIMDEN != @p_ID_BIRIME BEGIN DECLARE cursor_for_inline_select1 CURSOR LOCAL FOR SELECT * FROM CR_BIRIM_CEVRIM WHERE ID_MAMUL = @p_ID_MAMUL AND ((ID_BIRIM = @p_ID_BIRIMDEN AND ID_BIRIM2 = @p_ID_BIRIME) OR (ID_BIRIM2 = @p_ID_BIRIMDEN AND ID_BIRIM = @p_ID_BIRIME)) AND VALID = 1 OPEN cursor_for_inline_select1 FETCH NEXT FROM cursor_for_inline_select1 WHILE (@@FETCH_STATUS <> -1) BEGIN IF c.ID_BIRIM = @p_ID_BIRIMDEN BEGIN SELECT @v_katsayi = c.MT_ORAN END ELSE BEGIN SELECT @v_katsayi = 1/c.MT_ORAN END END CLOSE cursor_for_inline_select1 DEALLOCATE cursor_for_inline_select1 END ELSE BEGIN SELECT @v_katsayi = 1 END DEALLOCATE cursor_for_inline_select1 return ROUND(@v_katsayi, 10) GOTO ExitLabel1 Exception1: BEGIN DEALLOCATE cursor_for_inline_select1 return 0 END ExitLabel1: return ROUND(@v_katsayi, 10) END GO
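
    The error most likely comes from the FETCH NEXT without an INTO clause (and the SELECT * cursor): inside a T-SQL function every SELECT must assign into variables, otherwise SQL Server treats it as a result set being returned to the client. Since the original loop only ever keeps the last matching row, the cursor can be dropped entirely. A hedged, set-based rewrite is below, shown as the script text a C# migration step might execute; the connection string is hypothetical, and it assumes MT_ORAN is non-zero and that at most one conversion row matches.

    // Hedged sketch: deploys a cursor-free rewrite of f_birim_cevrim_katsayi.
    // Every SELECT assigns into a variable, which avoids the original error.
    using System.Data.SqlClient;

    class DeployFunction
    {
        const string CreateFunctionSql = @"
    CREATE FUNCTION dbo.f_birim_cevrim_katsayi
        (@p_ID_MAMUL FLOAT, @p_ID_BIRIMDEN FLOAT, @p_ID_BIRIME FLOAT)
    RETURNS FLOAT
    AS
    BEGIN
        DECLARE @v_katsayi FLOAT;

        IF @p_ID_BIRIMDEN = @p_ID_BIRIME
            SET @v_katsayi = 1;
        ELSE
            SELECT TOP 1 @v_katsayi = CASE WHEN ID_BIRIM = @p_ID_BIRIMDEN
                                           THEN MT_ORAN ELSE 1 / MT_ORAN END
            FROM CR_BIRIM_CEVRIM
            WHERE ID_MAMUL = @p_ID_MAMUL
              AND VALID = 1
              AND ((ID_BIRIM  = @p_ID_BIRIMDEN AND ID_BIRIM2 = @p_ID_BIRIME) OR
                   (ID_BIRIM2 = @p_ID_BIRIMDEN AND ID_BIRIM  = @p_ID_BIRIME));

        RETURN ROUND(ISNULL(@v_katsayi, 0), 10);  -- mirrors the PL/SQL WHEN OTHERS THEN RETURN 0
    END";

        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=Migration;Integrated Security=true"))
            using (var cmd = new SqlCommand(CreateFunctionSql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }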

    Read the article

  • How to "wrap" implementation in C#

    - by igor
    Hello, I have these classes in C# (.NET Framework 3.5) described below: public class Base { public int State {get; set;} public virtual int Method1(){} public virtual string Method2(){} ... public virtual void Method10(){} } public class B: Base { // some implementation } public class Proxy: Base { private B _b; public Proxy(B b) { _b = b; } public override int Method1() { if (State == Running) return _b.Method1(); else return base.Method1(); } public override string Method2() { if (State == Running) return _b.Method2(); else return base.Method2(); } public override void Method10() { if (State == Running) _b.Method10(); else base.Method10(); } } I want to get something like this: public Base GetStateDependentImplementation() { if (State == Running) // may be some other rule return _b; else return base; // compile error } and my Proxy's implementation will be: public class Proxy: Base { ... public override int Method1() { return GetStateDependentImplementation().Method1(); } public override string Method2() { return GetStateDependentImplementation().Method2(); } ... } Of course, I can do this (aggregation of the base implementation): public class RepeaterOfBase: Base // no overrides, just inheritance { } public class Proxy: Base { private B _b; private RepeaterOfBase _base; public Proxy(B b, RepeaterOfBase aBase) { _b = b; _base = aBase; } } ... public Base GetStateDependentImplementation() { if (State == Running) return _b; else return _base; } ... But an instance of the Base class is very large and I have to avoid having an additional copy of it in memory. So I have to: simplify my code; "wrap" the implementation; avoid code duplication; avoid aggregating any additional instance of the Base class. Is it possible to reach these goals?
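
    There is no way to hand out base as a separate object - base.Method1() is just a call instruction on the same instance, not a second instance - but the duplication can still be factored out without copying the big Base object. One hedged sketch: keep tiny private wrappers that call the base implementation, and let GetStateDependentImplementation return a lightweight adapter built from delegates rather than a second Base. Names other than Base, B, Proxy and Running are hypothetical, and only two of the ten methods are shown.

    // Hedged sketch: the state-dependent target is exposed through a small
    // interface; the "base" side is a lightweight forwarder over this same
    // instance, so no extra copy of Base is ever created.
    using System;

    public class Base
    {
        public const int Running = 1;            // hypothetical stand-in for the question's Running state
        public int State { get; set; }
        public virtual int Method1() { return 0; }
        public virtual string Method2() { return "base"; }
    }

    public class B : Base
    {
        public override int Method1() { return 42; }
        public override string Method2() { return "from B"; }
    }

    public interface IImpl
    {
        int Method1();
        string Method2();
    }

    public class Proxy : Base
    {
        private readonly B _b;
        private readonly IImpl _bImpl;
        private readonly IImpl _baseImpl;

        public Proxy(B b)
        {
            _b = b;
            _bImpl = new Forwarder(_b.Method1, _b.Method2);
            // BaseMethod1/2 call base.MethodX() on *this* instance - no extra Base object.
            _baseImpl = new Forwarder(BaseMethod1, BaseMethod2);
        }

        private int BaseMethod1() { return base.Method1(); }
        private string BaseMethod2() { return base.Method2(); }

        private IImpl GetStateDependentImplementation()
        {
            return State == Running ? _bImpl : _baseImpl;   // "may be some other rule"
        }

        public override int Method1() { return GetStateDependentImplementation().Method1(); }
        public override string Method2() { return GetStateDependentImplementation().Method2(); }

        // Tiny adapter: two delegates, negligible memory compared to Base itself.
        private sealed class Forwarder : IImpl
        {
            private readonly Func<int> _m1;
            private readonly Func<string> _m2;
            public Forwarder(Func<int> m1, Func<string> m2) { _m1 = m1; _m2 = m2; }
            public int Method1() { return _m1(); }
            public string Method2() { return _m2(); }
        }
    }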

    Read the article

  • Create an Oracle function that returns a table

    - by Craig
    I'm trying to create a function in a package that returns a table. I hope to call the function once in the package, but be able to re-use its data multiple times. While I know I can create temp tables in Oracle, I was hoping to keep things DRY. So far, this is what I have: Header: CREATE OR REPLACE PACKAGE TEST AS TYPE MEASURE_RECORD IS RECORD ( L4_ID VARCHAR2(50), L6_ID VARCHAR2(50), L8_ID VARCHAR2(50), YEAR NUMBER, PERIOD NUMBER, VALUE NUMBER ); TYPE MEASURE_TABLE IS TABLE OF MEASURE_RECORD; FUNCTION GET_UPS( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTHLY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE; END TEST; Body: CREATE OR REPLACE PACKAGE BODY TEST AS FUNCTION GET_UPS ( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTHLY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE IS T MEASURE_TABLE; BEGIN SELECT ... INTO T FROM ... ; RETURN T; END GET_UPS; END TEST; The header compiles, the body does not. One error message is 'not enough values', which probably means that I should be selecting into the MEASURE_RECORD, rather than the MEASURE_TABLE. What am I missing?

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment Threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STA's guarantee that COM objects instantiated on a specific thread stay on that specific thread and any access to a COM object from another thread automatically marshals that thread to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed never changing thread. ASP.NET by default uses MTA (multi-threaded apartment) threads which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead. It's amazing how much COM Interop I still see today so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM Components. STA in ASP.NET Support for STA threading in the ASP.NET framework is fairly limited. Specifically only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation or what you might know as ASPCOMPAT mode. For WebForms running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:<%@ Page Language="C#" AspCompat="true" %> which runs the page in STA mode. Removing it runs in MTA mode. Simple. Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with class ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available. So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing running components in MTA environments may appear to work just fine. However as load increases and threads get re-used by ASP.NET COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components which store their state in thread local storage on the STA thread. If threads overlap this global store can easily get corrupted which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on. What about COM+? COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with it's own thread pool manager for COM objects. It steps in to the COM instantiation pipeline and hands out COM instances from its own internally maintained STA Thread pool. 
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components which is about as dififcult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET Technologies: ASMX Web Services ASP.NET MVC WCF Web Services ASP.NET Web API ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack it's init functionality for processing requests. Here's what this looks like for Web Services:namespace FoxProAspNet { public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler { protected override void OnInit(EventArgs e) { IHttpHandler handler = new WebServiceHandlerFactory().GetHandler( this.Context, this.Context.Request.HttpMethod, this.Context.Request.FilePath, this.Context.Request.PhysicalPath); handler.ProcessRequest(this.Context); this.Context.ApplicationInstance.CompleteRequest(); } public IAsyncResult BeginProcessRequest( HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } } public class AspCompatWebServiceStaHandlerWithSessionState : WebServiceStaHandler, IRequiresSessionState { } } This class overrides the ASP.NET WebForms Page class which has a little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() method that is responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods. 
The way this works is that BeginProcessRequest() fires first to set up the threads and starts intializing the handler. As part of that process the OnInit() method is fired which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompletRequest() which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing which makes this code fairly efficient. In a nutshell, we're highjacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express): <configuration> <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated-4.0" /> <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" precondition="integrated"/> </handlers> </system.webServer> </configuration> (Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name or simply remove the handler from the list there which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:<configuration> <system.web> <httpHandlers> <remove path="*.asmx" verb="*" /> <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" /> </httpHandlers> </system.web></configuration> To test, create a new ASMX Web Service and create a method like this: [WebService(Namespace = "http://foxaspnet.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class FoxWebService : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World. Threading mode is: " + System.Threading.Thread.CurrentThread.GetApartmentState(); } } Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics. 
To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState { private RequestContext _requestContext; public MvcStaThreadHttpAsyncHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); _requestContext = requestContext; } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } protected override void OnInit(EventArgs e) { var controllerName = _requestContext.RouteData.GetRequiredString("controller"); var controllerFactory = ControllerBuilder.Current.GetControllerFactory(); var controller = controllerFactory.CreateController(_requestContext, controllerName); if (controller == null) throw new InvalidOperationException("Could not find controller: " + controllerName); try { controller.Execute(_requestContext); } finally { controllerFactory.ReleaseController(controller); } this.Context.ApplicationInstance.CompleteRequest(); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } public override void ProcessRequest(HttpContext httpContext) { throw new NotSupportedException("STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)"); } } This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler the logic occurs in the OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler: public class MvcStaThreadRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); return new MvcStaThreadHttpAsyncHandler(requestContext); } } At this point you can instantiate this route handler and force STA requests to MVC by specifying a route. 
The following sets up the ASP.NET Default Route:Route mvcRoute = new Route("{controller}/{action}/{id}", new RouteValueDictionary( new { controller = "Home", action = "Index", id = UrlParameter.Optional }), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute);   To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() functionality extension method that MVC provides, here is an extension method for MapMvcStaRoute(): public static class RouteCollectionExtensions { public static void MapMvcStaRoute(this RouteCollection routeTable, string name, string url, object defaults = null) { Route mvcRoute = new Route(url, new RouteValueDictionary(defaults), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); } } With this the syntax to add  route becomes a little easier and matches the MapRoute() method:RouteTable.Routes.MapMvcStaRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this: public class ThreadTestController : Controller { public string ThreadingMode() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Try this test both with only the MapRoute() hookup in the RouteConfiguration in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute() leaving all the parameters the same and re-run the request. You now should see STA as the result. You're on your way using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule if you need to serve straight SOAP Services over HTTP I 'd recommend sticking with the simpler ASMX services especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte but I found a solution from Scott Seely on his blog that describes the progress and that seems to work well. I'm copying his code below so this STA information is all in one place and quickly explain. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or Interface if you separate the contract and class) that looks like this: [ServiceContract] public class WcfService { [OperationContract] public string HelloWorldMta() { return Thread.CurrentThread.GetApartmentState().ToString(); } // Make sure you use this custom STAOperationBehavior // attribute to force STA operation of service methods [STAOperationBehavior] [OperationContract] public string HelloWorldSta() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Pretty straight forward. The latter method returns STA while the former returns MTA. To make STA work every method needs to be marked up. The implementation consists of the attribute and OperationInvoker implementation. 
Here are the two classes required to make this work from Scott's post:public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior { public void AddBindingParameters(OperationDescription operationDescription, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.ClientOperation clientOperation) { // If this is applied on the client, well, it just doesn’t make sense. // Don’t throw in case this attribute was applied on the contract // instead of the implementation. } public void ApplyDispatchBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation) { // Change the IOperationInvoker for this operation. dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker); } public void Validate(OperationDescription operationDescription) { if (operationDescription.SyncMethod == null) { throw new InvalidOperationException("The STAOperationBehaviorAttribute " + "only works for synchronous method invocations."); } } } public class STAOperationInvoker : IOperationInvoker { IOperationInvoker _innerInvoker; public STAOperationInvoker(IOperationInvoker invoker) { _innerInvoker = invoker; } public object[] AllocateInputs() { return _innerInvoker.AllocateInputs(); } public object Invoke(object instance, object[] inputs, out object[] outputs) { // Create a new, STA thread object[] staOutputs = null; object retval = null; Thread thread = new Thread( delegate() { retval = _innerInvoker.Invoke(instance, inputs, out staOutputs); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); thread.Join(); outputs = staOutputs; return retval; } public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state) { // We don’t handle async… throw new NotImplementedException(); } public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result) { // We don’t handle async… throw new NotImplementedException(); } public bool IsSynchronous { get { return true; } } } The key in this setup is the Invoker and the Invoke method which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each thread, but it'll work for low volume requests and insure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scotts code is though that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a  pain in the ass and thankfully there isn't too much stuff out there anymore that requires it. But when you need it and you need to access STA functionality from .NET at least there are a few options available to make it happen. 
    Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-) Resources: Download STA Handler Code Examples | Scott Seely's original STA WCF OperationBehavior Article. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro, ASP.NET, .NET, COM
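
    A follow-up to the note in the WCF section above: the STAOperationInvoker shown there creates a brand new STA thread for every call, and the post points out that a pooled thread manager would be the fix if that overhead matters. The sketch below is hypothetical (not from the article) and shows one way such a pool could look; real STA threads that host COM objects may also need to pump messages, which this sketch omits.

    // Hypothetical sketch: a fixed set of STA threads pulling queued work items,
    // so callers reuse long-lived STA threads instead of creating one per request.
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public sealed class StaThreadPool : IDisposable
    {
        private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
        private readonly Thread[] _threads;

        public StaThreadPool(int threadCount)
        {
            _threads = new Thread[threadCount];
            for (int i = 0; i < threadCount; i++)
            {
                _threads[i] = new Thread(() =>
                {
                    foreach (var action in _work.GetConsumingEnumerable())
                        action();                        // runs on a long-lived STA thread
                });
                _threads[i].IsBackground = true;
                _threads[i].SetApartmentState(ApartmentState.STA);
                _threads[i].Start();
            }
        }

        // Queues a delegate onto an STA thread and blocks until it completes,
        // mirroring the synchronous Invoke() contract of the WCF invoker.
        public T Run<T>(Func<T> func)
        {
            T result = default(T);
            Exception error = null;
            using (var done = new ManualResetEventSlim(false))
            {
                _work.Add(() =>
                {
                    try { result = func(); }
                    catch (Exception ex) { error = ex; }
                    finally { done.Set(); }
                });
                done.Wait();
            }
            if (error != null) throw error;   // rethrow on the caller's thread
            return result;
        }

        public void Dispose() { _work.CompleteAdding(); }
    }

    The WCF invoker's Invoke() could then call a shared StaThreadPool.Run(...) instead of newing up a thread each time.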

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test: 100,000 iterations Load factor of 20 concurrent connections IISReset before starting A short warm up run for API and MVC to make sure startup cost is mitigated Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJsonab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - result would probably more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with quad core Sandibridge I7-2720QM @ 2.20ghz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle so the performance here should be Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performances of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on it, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
    For these tests this is why I did the IIS RESET and warm up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. Subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST which was by far the slowest and the raw handler which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance but also about the adequacy for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often times old stuff that's been optimized and designed for a time of less horse power can utterly blow the doors off newer tech and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0 which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources: Source Code on GitHub; Apache HTTP Server Project (ab.exe is part of the binary distribution). © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web Api
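
    As a practical footnote: if ab.exe is not installed, a rough stand-in for the warm-up-and-measure loop above can be scripted with HttpClient. This is a hypothetical sketch (URL, request count and concurrency are placeholders), useful only for the kind of informal comparison done in this post, not for rigorous load testing.

    // Hypothetical sketch: fire N concurrent GETs at one URL and report req/sec,
    // roughly mimicking "ab.exe -nN -cC url" for quick, informal comparisons.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MiniBench
    {
        static void Main()
        {
            const string url = "http://localhost/aspnetperf/api/HelloWorld"; // placeholder
            const int totalRequests = 10000, concurrency = 20;

            // Without this, outgoing connections are throttled to 2 per host.
            ServicePointManager.DefaultConnectionLimit = concurrency;

            using (var client = new HttpClient())
            {
                // Warm up so JIT/startup cost doesn't skew the first numbers.
                client.GetStringAsync(url).Wait();

                var sw = Stopwatch.StartNew();
                var workers = Enumerable.Range(0, concurrency).Select(async _ =>
                {
                    for (int i = 0; i < totalRequests / concurrency; i++)
                        await client.GetStringAsync(url);
                });
                Task.WhenAll(workers).Wait();
                sw.Stop();

                Console.WriteLine("Requests/sec: {0:n0}", totalRequests / sw.Elapsed.TotalSeconds);
            }
        }
    }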

    Read the article

  • When I shut down the computer, it restarts

    - by Prabu
    I am unable to shutdown. Whenever I try to shutdown, it reboots. I am running Ubuntu 12.10. I have run the boot-repair and this is the result: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v2.00) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos1)/boot/grub. sda1: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.10 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/i386-pc/core.img sda2: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 1,936,809,983 1,936,807,936 83 Linux /dev/sda2 1,936,812,030 1,953,523,711 16,711,682 5 Extended /dev/sda5 1,936,812,032 1,953,523,711 16,711,680 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 229a5484-7659-4ce1-98ce-2f05f61a1ffa ext4 /dev/sda5 6c6dca25-ab67-4de4-8602-26fdb6154781 swap /dev/sr0 iso9660 Ubuntu 12.10 amd64 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sr0 /cdrom iso9660 (ro,noatime) =========================== sda1/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 
229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi linux /boot/vmlinuz-3.5.0-19-generic root=UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa ro quiet splash acpi=force $vt_handoff initrd /boot/initrd.img-3.5.0-19-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { menuentry 'Ubuntu, with Linux 3.5.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-advanced-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi echo 'Loading Linux 3.5.0-19-generic ...' linux /boot/vmlinuz-3.5.0-19-generic root=UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa ro quiet splash acpi=force $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-19-generic } menuentry 'Ubuntu, with Linux 3.5.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-recovery-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { recordfail insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi echo 'Loading Linux 3.5.0-19-generic ...' 
linux /boot/vmlinuz-3.5.0-19-generic root=UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-19-generic } menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-advanced-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa ro quiet splash acpi=force $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-17-generic } menuentry 'Ubuntu, with Linux 3.5.0-17-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-recovery-229a5484-7659-4ce1-98ce-2f05f61a1ffa' { recordfail insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-17-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 229a5484-7659-4ce1-98ce-2f05f61a1ffa else search --no-floppy --fs-uuid --set=root 229a5484-7659-4ce1-98ce-2f05f61a1ffa fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. 
### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda1/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda1 during installation UUID=229a5484-7659-4ce1-98ce-2f05f61a1ffa / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=6c6dca25-ab67-4de4-8602-26fdb6154781 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 200.155235291 = 214.915047424 boot/grub/grub.cfg 1 40.280788422 = 43.251167232 boot/initrd.img-3.5.0-17-generic 1 2.468288422 = 2.650304512 boot/initrd.img-3.5.0-19-generic 1 200.149234772 = 214.908604416 boot/vmlinuz-3.5.0-17-generic 1 1.990135193 = 2.136891392 boot/vmlinuz-3.5.0-19-generic 1 2.468288422 = 2.650304512 initrd.img 1 1.990135193 = 2.136891392 vmlinuz 1 1.990135193 = 2.136891392 vmlinuz.old 1 =============================== StdErr Messages: =============================== cat: write error: Broken pipe File descriptor 8 (/proc/6297/mounts) leaked on lvscan invocation. Parent PID 13390: bash No volume groups found ADDITIONAL INFORMATION : =================== log of boot-repair 2012-12-17__01h53 =================== boot-repair version : 3.197~ppa1~quantal boot-sav version : 3.197~ppa1~quantal glade2script version : 3.2.2~ppa45~quantal boot-sav-extra version : 3.197~ppa1~quantal boot-repair is executed in live-session (Ubuntu 12.10, quantal, Ubuntu, x86_64) CPU op-mode(s): 32-bit, 64-bit file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- maybe-ubiquity =================== os-prober: /dev/sda1:Ubuntu 12.10 (12.10):Ubuntu:linux =================== blkid: /dev/loop0: TYPE="squashfs" /dev/sr0: LABEL="Ubuntu 12.10 amd64" TYPE="iso9660" /dev/sda1: UUID="229a5484-7659-4ce1-98ce-2f05f61a1ffa" TYPE="ext4" /dev/sda5: UUID="6c6dca25-ab67-4de4-8602-26fdb6154781" TYPE="swap" 1 disks with OS, 1 OS : 1 Linux, 0 MacOS, 0 Windows, 0 unknown type OS. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== sda1/etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== sda1/etc/grub.d/ : drwxr-xr-x 2 root root 4096 Oct 17 14:59 grub.d total 72 -rwxr-xr-x 1 root root 7541 Oct 14 17:36 00_header -rwxr-xr-x 1 root root 5488 Oct 4 09:30 05_debian_theme -rwxr-xr-x 1 root root 10891 Oct 14 17:36 10_linux -rwxr-xr-x 1 root root 10258 Oct 14 17:36 20_linux_xen -rwxr-xr-x 1 root root 1688 Oct 11 14:10 20_memtest86+ -rwxr-xr-x 1 root root 10976 Oct 14 17:36 30_os-prober -rwxr-xr-x 1 root root 1426 Oct 14 17:36 30_uefi-firmware -rwxr-xr-x 1 root root 214 Oct 14 17:36 40_custom -rwxr-xr-x 1 root root 216 Oct 14 17:36 41_custom -rw-r--r-- 1 root root 483 Oct 14 17:36 README =================== UEFI/Legacy mode: This live-session is not in EFI-mode. SecureBoot maybe enabled. =================== PARTITIONS & DISKS: sda1 : sda, not-sepboot, grubenv-ok grub2, grub-pc , update-grub, 64, with-boot, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, /mnt/boot-sav/sda1. sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 2048 sectors * 512 bytes =================== parted -l: Model: ATA ST1000DM003-1CH1 (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 992GB 992GB primary ext4 boot 2 992GB 1000GB 8556MB extended 5 992GB 1000GB 8556MB logical linux-swap(v1) Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only. Error: Can't have a partition outside the disk! =================== parted -lm: BYT; /dev/sda:1000GB:scsi:512:4096:msdos:ATA ST1000DM003-1CH1; 1:1049kB:992GB:992GB:ext4::boot; 2:992GB:1000GB:8556MB:::; 5:992GB:1000GB:8556MB:linux-swap(v1)::; Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only. Error: Can't have a partition outside the disk! 
=================== mount: /cow on / type overlayfs (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) /dev/sr0 on /cdrom type iso9660 (ro,noatime) /dev/loop0 on /rofs type squashfs (ro,noatime) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) tmpfs on /tmp type tmpfs (rw,nosuid,nodev) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) gvfsd-fuse on /run/user/ubuntu/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=ubuntu) /dev/sda1 on /mnt/boot-sav/sda1 type ext4 (rw) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda5 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): alarm ashmem autofs binder block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fd full fuse fw0 hidraw0 hidraw1 hpet input kmsg kvm log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda5 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom usb vga_arbiter vhost-net zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /cow overlayfs 3.9G 100M 3.8G 3% / udev devtmpfs 3.9G 12K 3.9G 1% /dev tmpfs tmpfs 1.6G 864K 1.6G 1% /run /dev/sr0 iso9660 763M 763M 0 100% /cdrom /dev/loop0 squashfs 717M 717M 0 100% /rofs tmpfs tmpfs 3.9G 32K 3.9G 1% /tmp none tmpfs 5.0M 4.0K 5.0M 1% /run/lock none tmpfs 3.9G 176K 3.9G 1% /run/shm none tmpfs 100M 52K 100M 1% /run/user /dev/sda1 ext4 910G 26G 838G 3% /mnt/boot-sav/sda1 =================== fdisk -l: Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x000da1e9 Device Boot Start End Blocks Id System /dev/sda1 * 2048 1936809983 968403968 83 Linux /dev/sda2 1936812030 1953523711 8355841 5 Extended Partition 2 does not start on physical sector boundary. /dev/sda5 1936812032 1953523711 8355840 82 Linux swap / Solaris Partition outside the disk detected. =================== Recommended repair Recommended-Repair This setting will reinstall the grub2 of sda1 into the MBR of sda. Additional repair will be performed: unhide-bootmenu-10s Unhide GRUB boot menu in sda1/etc/default/grub grub-install (GRUB) 2.00-7ubuntu11,grub-install (GRUB) 2. Reinstall the GRUB of sda1 into the MBR of sda Installation finished. No error reported. grub-install /dev/sda: exit code of grub-install /dev/sda:0 chroot /mnt/boot-sav/sda1 update-grub Generating grub.cfg ... 
Found linux image: /boot/vmlinuz-3.5.0-19-generic Found initrd image: /boot/initrd.img-3.5.0-19-generic Found linux image: /boot/vmlinuz-3.5.0-17-generic Found initrd image: /boot/initrd.img-3.5.0-17-generic Found memtest86+ image: /boot/memtest86+.bin Unhide GRUB boot menu in sda1/boot/grub/grub.cfg Boot successfully repaired. You can now reboot your computer.

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 3)

    - by Geertjan
    To complete the picture, let's use the traditional (that is, old) Hibernate mechanism, i.e., via XML files, rather than via the annotations shown yesterday. It's definitely trickier, with many more places where typos can occur, but that's why it's the old mechanism. I do not recommend this approach. I recommend the approach shown yesterday. The other players in this scenario include PostgreSQL, as outlined in the previous blog entries in this series. Here's the structure of the module, replacing the code shown yesterday: Here's the Employees class; notice that it has no annotations: import java.io.Serializable; import java.util.Date; public class Employees implements Serializable { private int employeeId; private String firstName; private String lastName; private Date dateOfBirth; private String phoneNumber; private String junk; public int getEmployeeId() { return employeeId; } public void setEmployeeId(int employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public Date getDateOfBirth() { return dateOfBirth; } public void setDateOfBirth(Date dateOfBirth) { this.dateOfBirth = dateOfBirth; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public String getJunk() { return junk; } public void setJunk(String junk) { this.junk = junk; } } And here's the Hibernate configuration file: <?xml version="1.0"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">org.postgresql.Driver</property> <property name="hibernate.connection.url">jdbc:postgresql://localhost:5432/smithdb</property> <property name="hibernate.connection.username">smith</property> <property name="hibernate.connection.password">smith</property> <property name="hibernate.connection.pool_size">1</property> <property name="hibernate.default_schema">public</property> <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property> <property name="hibernate.current_session_context_class">thread</property> <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property> <property name="hibernate.show_sql">true</property> <mapping resource="org/db/viewer/employees.hbm.xml"/> </session-factory> </hibernate-configuration> Next, the Hibernate mapping file: <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="org.db.viewer.Employees" table="employees" schema="public" catalog="smithdb"> <id name="employeeId" column="employee_id" type="int"> <generator class="increment"/> </id> <property name="firstName" 
column="first_name" type="string" />         <property name="lastName" column="last_name" type="string" />         <property name="dateOfBirth" column="date_of_birth" type="date" />         <property name="phoneNumber" column="phone_number" type="string" />         <property name="junk" column="junk" type="string" />             </class>     </hibernate-mapping> Then, the HibernateUtil file, for providing access to the Hibernate SessionFactory: import java.net.URL; import org.hibernate.cfg.AnnotationConfiguration; import org.hibernate.SessionFactory; public class HibernateUtil {     private static final SessionFactory sessionFactory;         static {         try {             // Create the SessionFactory from standard (hibernate.cfg.xml)             // config file.             String res = "org/db/viewer/employees.cfg.xml";             URL myURL = Thread.currentThread().getContextClassLoader().getResource(res);             sessionFactory = new AnnotationConfiguration().configure(myURL).buildSessionFactory();         } catch (Throwable ex) {             // Log the exception.             System.err.println("Initial SessionFactory creation failed." + ex);             throw new ExceptionInInitializerError(ex);         }     }         public static SessionFactory getSessionFactory() {         return sessionFactory;     }     } Finally, the "createKeys" in the ChildFactory: @Override protected boolean createKeys(List list) {     Session session = HibernateUtil.getSessionFactory().getCurrentSession();     Transaction transac = null;     try {         transac = session.beginTransaction();         Query query = session.createQuery("from Employees");         list.addAll(query.list());     } catch (HibernateException he) {         Exceptions.printStackTrace(he);         if (transac != null){             transac.rollback();         }     } finally {         session.close();     }     return true; } Note that Constantine Drabo has a similar article here. Run the application and the result should be the same as yesterday.

    Read the article

  • Public domain usage of imagery from films? [on hold]

    - by AdamJones
    I'm thinking of starting a small film site, which would begin as a simple blog. Imagery from films I discuss on the site would be vital to the look and feel of this site. Instantly, though, this makes me wonder about copyright/public domain rights for such imagery. I just wondered if anyone had general or specific advice about using imagery from this industry or another similar situation. On the one hand I know the film industry aggressively tries to protect its IP (fair enough), but on the other hand, surely film companies do release some imagery of their films in stills format into the public domain, simply to help their distribution and advertising efforts? I have tried looking on stock photo galleries for images of film stills but only found moviestillsdb.com, which seemed very limited in its results. I've researched a bit about fair use (http://fairuse.stanford.edu/overview/fair-use/) as well, which I know applies to the USA specifically. This seems to suggest that a still of a film is within these bounds. Still, any constructive advice others may have as a result of experience dealing with imagery, from film or another domain, would be greatly appreciated, assuming it isn't "get a lawyer".

    Read the article

  • Database Table Prefixes

    - by DoctorMick
    We're having a few discussions at work around the naming of our database tables. We're working on a large application with approx 100 database tables (OK, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming/organizing these within an Oracle database. The three current options are: create the different functional areas in separate schemas; create everything in the same schema but prefix the tables with the functional area; or create everything in the same schema with no prefixes. We have various pros and cons around each one, but I'd be interested to hear everyone's opinions on what the best solution is.
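    One practical place where the choice shows up is in any code or tooling that has to enumerate the tables: with separate schemas you filter on the schema pattern, with prefixes you filter on the table-name pattern. Here is a small, hypothetical JDBC sketch illustrating both (the schema names, prefix, and connection details are made up, and the Oracle JDBC driver is assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ListTables {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "app_user", "app_password")) {
                DatabaseMetaData meta = conn.getMetaData();

                // Option 1: functional areas as separate schemas; filter on the schema pattern.
                try (ResultSet rs = meta.getTables(null, "FINANCE", "%", new String[] {"TABLE"})) {
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
                    }
                }

                // Option 2: one schema with prefixes; filter on the table-name pattern instead.
                // Note: "_" is itself a wildcard in these patterns; for strict matching it can be
                // escaped using the character returned by meta.getSearchStringEscape().
                try (ResultSet rs = meta.getTables(null, "APP", "FIN_%", new String[] {"TABLE"})) {
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_NAME"));
                    }
                }
            }
        }
    }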

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6.The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:* Adam Bien, Consultant Author/ Speaker, adam-bien.com* Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat,* David Blevins, Senior Software Engineer, and co-founder of the OpenEJB project and a     founder of Apache Geronimo* Roberto Chinnici, Technical Staff Consulting Member, Oracle* Jim Knutson, Java EE Architect, IBM* Reza Rahman, Lead Engineer, Caucho Technology, Inc.,* Krasimir Semerdzhiev, Development Architect, SAP Labs BulgariaThe panel addressed such topics as Platform and API Adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste that code in your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run". //Created by ODI Studio import oracle.odi.core.OdiInstance import oracle.odi.core.config.OdiInstanceConfig import oracle.odi.core.config.MasterRepositoryDbInfo import oracle.odi.core.config.WorkRepositoryDbInfo import oracle.odi.core.security.Authentication import oracle.odi.core.config.PoolingAttributes import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder import oracle.odi.domain.runtime.scenario.OdiScenario import java.util.Collection import java.io.* /* ----------------------------------------------------------------------------------------- Simple sample code to list all executions of the last version of a scenario, along with detailed steps information ----------------------------------------------------------------------------------------- */ /* update the following parameters to match your environment => */ def url = "jdbc:oracle:thin:@myserver:1521:orcl" def driver = "oracle.jdbc.OracleDriver" def schema = "ODIM1116" def schemapwd = "ODIM1116PWD" def workrep = "WORKREP1116" def odiuser = "SUPERVISOR" def odiuserpwd = "SUNOPSIS" // Rather than hardcoding the project code and folder name, // a great improvement here would be to parse the entire repository def scenario_name = "LOAD_DWH" /*Scenario Name*/ /* <=End of the update section */ //-------------------------------------- // Connection to the repository // Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are // running the script from a Studio that is already connected to the repository def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes()) def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes()) def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo)) //-------------------------------------- // In all cases, we need to make sure we have authorized access to the repository def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray()) odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth) //-------------------------------------- // Retrieve the scenario we are looking for def odiScenario = 
((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name) if (odiScenario == null){    println("Error: cannot find scenario "+scenario_name);    return} //--------------------------------------// Retrieve all reports for the scenario def OdiScenarioReportsList = odiScenario.getScenarioReports() println("*** Listing all reports for Scenario \""+scenario_name+"\" ") //--------------------------------------// For each report, print the folowing:// - start time// - duration// - status// - step reports: selection of details for (s in OdiScenarioReportsList){        println("\tStart time: " + s.getSessionStartTime())        println("\tDuration: " + s.getSessionDuration())        println("\tStatus: " + s.getSessionStatus())                def OdiScenarioStepReportsList = s.getStepReports()        for (st in OdiScenarioStepReportsList){            println("\t\tStep Name: " + st.getStepName())            println("\t\tStep Resource Name: " + st.getStepResourceName())            println("\t\tStep Start time: " + st.getStepStartTime())            println("\t\tStep Duration: " + st.getStepDuration())            println("\t\tStep Status: " + st.getStepStatus())            println("\t\tStep # of inserts: " + st.getStepInsertCount())            println("\t\tStep # of updates: " + st.getStepUpdateCount()+'\n')      }      println("\t")}
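    As a companion from plain Java (the ODI SDK is a Java API, so the calls below simply mirror the Groovy above), here is a short, hypothetical sketch that connects, looks up the latest version of the scenario, and reports how many executions it has. It assumes getScenarioReports() returns a java.util.Collection, as the iteration in the Groovy suggests, and all connection values are placeholders to adapt:

    import oracle.odi.core.OdiInstance;
    import oracle.odi.core.config.MasterRepositoryDbInfo;
    import oracle.odi.core.config.OdiInstanceConfig;
    import oracle.odi.core.config.PoolingAttributes;
    import oracle.odi.core.config.WorkRepositoryDbInfo;
    import oracle.odi.core.security.Authentication;
    import oracle.odi.domain.runtime.scenario.OdiScenario;
    import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder;

    public class ScenarioExecutionCount {
        public static void main(String[] args) {
            // Placeholder connection details; adapt to your repository.
            String url = "jdbc:oracle:thin:@myserver:1521:orcl";
            String driver = "oracle.jdbc.OracleDriver";
            String schema = "ODIM1116";
            String schemaPwd = "ODIM1116PWD";
            String workRep = "WORKREP1116";
            String odiUser = "SUPERVISOR";
            String odiUserPwd = "SUNOPSIS";
            String scenarioName = "LOAD_DWH";

            // Connect to the master/work repositories, exactly as in the Groovy script.
            MasterRepositoryDbInfo masterInfo = new MasterRepositoryDbInfo(
                    url, driver, schema, schemaPwd.toCharArray(), new PoolingAttributes());
            WorkRepositoryDbInfo workInfo = new WorkRepositoryDbInfo(workRep, new PoolingAttributes());
            OdiInstance odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo));

            // Authenticate before touching the repository.
            Authentication auth = odiInstance.getSecurityManager()
                    .createAuthentication(odiUser, odiUserPwd.toCharArray());
            odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth);

            // Look up the latest version of the scenario and count its executions.
            OdiScenario scenario = ((IOdiScenarioFinder) odiInstance.getTransactionalEntityManager()
                    .getFinder(OdiScenario.class)).findLatestByName(scenarioName);
            if (scenario == null) {
                System.out.println("Cannot find scenario " + scenarioName);
                return;
            }
            System.out.println("Executions of " + scenarioName + ": " + scenario.getScenarioReports().size());
        }
    }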

    Read the article

  • ruby on rails configuration

    - by Themasterhimself
    Im using the following guide for getting started with rails for ubuntu 9.10. http://guides.rails.info/getting_started.html I have installed both ruby and gem. gokul@gokul-laptop:~$ ruby -v ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux] gokul@gokul-laptop:~$ gem -v 1.3.6 gokul@gokul-laptop:~$ For rails, gokul@gokul-laptop:~$sudo gem install rails doesnt seem to give any response. so used the synaptic package manager for installing it. And it seems to have installed correctly. gokul@gokul-laptop:~$ rails Usage: /usr/bin/rails /path/to/your/app [options] Options: -r, --ruby=path Path to the Ruby binary of your choice (otherwise scripts use env, dispatchers current path). Default: /usr/bin/ruby1.8 -d, --database=name Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db). Default: sqlite3 -D, --with-dispatchers Add CGI/FastCGI/mod_ruby dispatches code to generated application skeleton Default: false --freeze Freeze Rails in vendor/rails from the gems generating the skeleton Default: false -m, --template=path Use an application template that lives at path (can be a filesystem path or URL). Default: (none) Rails Info: -v, --version Show the Rails version number and quit. -h, --help Show this help message and quit. General Options: -p, --pretend Run but do not make any changes. -f, --force Overwrite files that already exist. -s, --skip Skip files that already exist. -q, --quiet Suppress normal output. -t, --backtrace Debugging: show backtrace on errors. -c, --svn Modify files with subversion. (Note: svn must be in path) -g, --git Modify files with git. (Note: git must be in path) Description: The 'rails' command creates a new Rails application with a default directory structure and configuration at the path you specify. Example: rails ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going. gokul@gokul-laptop:~$ app folder is created with all the proper folders. The problem starts with the following commands... gokul@gokul-laptop:~$ sudo gem install bundler [sudo] password for gokul: Successfully installed bundler-0.9.24 1 gem installed Installing ri documentation for bundler-0.9.24... Installing RDoc documentation for bundler-0.9.24... gokul@gokul-laptop:~$ bundle install Could not locate Gemfile gokul@gokul-laptop:~$ coming to the database, the default sqlite3 seems to have installed correctly. gokul@gokul-laptop:~$ sqlite3 SQLite version 3.6.16 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite The welcome aboard page is not being able to be found at (http://localhost:3000) after executing the following commands... 
gokul@gokul-laptop:~/Desktop$ rails blog create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop$ cd blog gokul@gokul-laptop:~/Desktop/blog$ rake db:create (in /home/gokul/Desktop/blog) gokul@gokul-laptop:~/Desktop/blog$ rails server create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create 
script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop/blog$ hope some one can help me with this...

    Read the article

  • The Latest Dish

    - by Oracle Staff
    Black Eyed Peas to Headline at Appreciation Event If you're coming to OpenWorld to fill up on the latest in IT solutions, be sure to save room for dessert. At the Oracle OpenWorld Appreciation Event, you'll be savoring the music of the world's hottest funk pop band, Black Eyed Peas, plus superstar rock legends Don Henley, of the Eagles, and Steve Miller. Save the date now: When: Wednesday, September 22, 8 p.m-12 a.m. Where: Treasure Island, San Francisco OpenWorld's annual thank-you event will be our most spectacular yet. Treasure Island, in the center of scenic San Francisco Bay, will once again serve as a rockin' oasis for Oracle customers and partners as they groove to the beat and enjoy delicious food, drinks, and festivities. Get all the details here.

    Read the article

  • John Burke's Welcome to the Applications Strategy Blog

    - by Tony Ouk
    Hi I'm John Burke and I'm the group Vice President of Oracle's Applications Business Unit.  Thanks for stopping by our Applications blog today.  The purpose of this site is to provide you, our customers, with timely, relevant, and balanced information about the state of the applications business, both here at Oracle and industry-wide. So on this site, you'll find information about Oracle's application products, how our customers have used those products to transform their businesses, and general industry trends which might help you craft YOUR applications roadmap.  So right now I'm walking to meet with one of Oracle's development executives.  I also plan to talk to Oracle customers and leading industry analysts.  I plan to provide a complete and balanced view of the total applications landscape.  I hope you check back often and view our updates.

    Read the article

  • BPM best practice by David Read and Niall Commiskey

    - by JuergenKress
    At our SOA Community Workspace (SOA Community membership required) you can find best practice documents for BPM implementations. Please make sure that your BPM experts and architects read these documents if you start or work on a BPM project. The material was created based on experience with large BPM implementations: 11g-Runtime-Overview-v1.pptx Advanced-BPM-Session1-v2.pptx Error-Handling-v4.pptx BPM-MessageRecovery-Final.doc We can also support you with your BPM project on-site. Please contact us if you need BPM support! SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: BPM,Niall Commiskey,David Read,BPM best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Systemupdate Nr. 10 für das FY12 – 25. Oktober 2011

    - by swalker
    Sun server, storage, and system software. Price list changes: worldwide price changes for Oracle system hardware and software; worldwide price changes for hardware and software of Oracle legacy systems. What do you need to do? The system updates cover price changes for system components and consist of two files: the announcement of the changes (PDF) and the detailed information (XLS). Using the ID numbers provided, you can find in the worksheet the part numbers of the components affected by a change. Further information: visit the system pricing page on the OPN portal regularly. In addition to further information about these updates, you will also find the current system price list and other resources there. * Note: Pricing information for Oracle systems is confidential Oracle information and is provided to you in accordance with the Oracle PartnerNetwork agreement.

    Read the article

  • OracleXETNSListener won't start

    - by David
    For some unknown reason OracleXETNSListener won't start which prevents SQL Developer from connecting to it. C:\oraclexe\app\oracle\product\10.2.0\server\NETWORK\ADMIN\listener.ora and C:\oraclexe\app\oracle\product\10.2.0\server\NETWORK\ADMIN\tnsnames both have correct hostname. C:\oraclexe\app\oracle\product\10.2.0\server\NETWORK\log\sqlnet has the following error: *********************************************************************** Fatal NI connect error 12641, connecting to: (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) VERSION INFORMATION: TNS for 32-bit Windows: Version 10.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production Time: 09-MAY-2010 18:08:29 Tracing not turned on. Tns error struct: ns main err code: 12641 TNS-12641: Authentication service failed to initialize ns secondary err code: 0 nt main err code: 0 nt secondary err code: 0 nt OS err code: 0 The only thing I can think of is that I may have changed my workgroup recently, but I don't know why that would matter. Any ideas would be greatly appreciated.
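    Once the listener does come up, a quick way to confirm that it is accepting TCP connections (independently of SQL Developer) is a plain JDBC thin connection test. This is only a hypothetical sketch: the XE SID, port, and credentials are assumptions to adapt, and the Oracle JDBC driver must be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ListenerCheck {
        public static void main(String[] args) {
            // The thin driver connects through the TNS listener, so a successful connection
            // here means the listener is up and accepting requests for the XE instance.
            String url = "jdbc:oracle:thin:@localhost:1521:XE";
            try (Connection conn = DriverManager.getConnection(url, "system", "manager")) {
                System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
            } catch (Exception e) {
                System.out.println("Listener/database not reachable: " + e.getMessage());
            }
        }
    }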

    Read the article

  • Slides from Upgrade webcast

    - by Alex Blyth
    Thanks everyone for attending the webcast on "Upgrading to Oracle 11g". I hope there were some useful tips for everyone. My apologies for the issue with the audio streaming - I'll re-record the session later this week and hopefully have it available soon thereafter. As I mentioned, the next session, on Oracle VM and Oracle Enterprise Linux, is on April 28, 2010. Please click here to enroll. As for the slides... here they are: You can download the slides here. Upgrade to Oracle 11g View more presentations from Oracle Australia. Thanks again. Cheers, Alex

    Read the article

  • Standard Apache (not OHS) with mod_osso for Single Signon

    - by Markos Fragkakis
    The mod_osso.so (the Apache plugin for Single Signon, provided by Oracle) is distributed with the Oracle HTTP Server (OHS), which is essentially a modified Apache. I am trying to use it on the standard Apache HTTP Server, and have not managed to get it to work. Configuration: Apache 2.2.15 OHS from the Oracle Web Tier Tools 11.1.1.2.0 Red Hat Linux 64 bit I have: Included the module in the modules directory (copied from corresponding modules dir in OHS) Included the libraries libiau.so and libclutsh.so.11.1 from Oracle Home. The absence of these libraries produced an error on starting Apache. Produced a osso.conf using the ssoreg.sh tool provided with OID (the LDAP implementation of Oracle) Created the required mod_osso.conf file, which I included in httpd.conf. The error I get when starting Apache is this: # /opt/apache_sso/bin/apachectl -k start httpd: Syntax error on line 1075 of /opt/apache_sso/conf/httpd.conf: Syntax error on line 1 of /opt/apache_sso/conf/mod_osso.conf: Cannot load /opt/apache_sso/modules/mod_osso.so into server: /opt/apache_sso/modules/mod_osso.so: undefined symbol: _audit_authentication_request My mod_osso.conf: # cat /opt/apache_sso/conf/mod_osso.conf LoadModule osso_module modules/mod_osso.so <IfModule mod_osso.c> OssoIdleTimeout off OssoIpCheck on OssoConfigFile conf/osso.conf #Location is the URI you want to protect <Location /myapp> require valid-user #OHS 11g AuthType Osso #OHS 10g AuthType Basic AuthType Osso </Location> </IfModule> Has anyone made mod_osso work on standard Apache HTTP server?

    Read the article

< Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >