Search Results

Search found 10481 results on 420 pages for 'identity insert'.


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely.

    Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
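    To make the granularity trade-off concrete, here is a small, purely illustrative C# sketch (my own, not from Udi's article; all names are hypothetical) contrasting a task-based command with a coarse, Excel-like update:

        using System;
        using System.Collections.Generic;

        // Fine-grained, intent-capturing command (the style Udi advocates):
        public class MakeCustomerPreferred
        {
            public Guid CustomerId { get; set; }
        }

        // Coarse-grained, CRUD-style message (the traditional web service style):
        public class UpdateCustomer
        {
            public Guid CustomerId { get; set; }
            public Dictionary<string, object> ChangedFields { get; set; }
        }

    The first message is tiny, states the user's intent, and rarely conflicts with anyone else's work; the second minimizes chatter but forces the server to infer what the user meant, which is exactly the tension the question is about.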

    Read the article

  • Modifying the SL/WIF Integration Bits to support Issued Token Credentials

    - by Your DisplayName here!
    The SL/WIF integration code that ships with the Identity Training Kit only supports Windows and UserName credentials to request tokens from an STS. This is fine for simple single-STS scenarios (like a single IdP). But the more common pattern for claims/token based systems is to split the STS roles into an IdP and a Resource STS (or whatever you wanna call it). In this case, the 2nd leg requires presenting the issued token from the 1st leg - this is not directly supported by the bits. But they can be easily modified to accomplish this.

    The Credential

    First we need a class that represents an issued token credential. Here we store the RSTR that came back from the client-to-IdP request:

        public class IssuedTokenCredentials : IRequestCredentials
        {
            public string IssuedToken { get; set; }
            public RequestSecurityTokenResponse RSTR { get; set; }

            public IssuedTokenCredentials(RequestSecurityTokenResponse rstr)
            {
                RSTR = rstr;
                IssuedToken = rstr.RequestedSecurityToken.RawToken;
            }
        }

    The Binding

    Next we need a binding to be used with issued token credential requests. This assumes you have an STS endpoint for mixed mode security with SecureConversation turned off.

        public class WSTrustBindingIssuedTokenMixed : WSTrustBinding
        {
            public WSTrustBindingIssuedTokenMixed()
            {
                this.Elements.Add(new HttpsTransportBindingElement());
            }
        }

    WSTrustClient

    The last step is to make some modifications to WSTrustClient to make it issued token aware. In the constructor you have to check for the credential type, and if it is an issued token, store it away.

        private RequestSecurityTokenResponse _rstr;

        public WSTrustClient(Binding binding, EndpointAddress remoteAddress, IRequestCredentials credentials)
            : base(binding, remoteAddress)
        {
            if (null == credentials)
            {
                throw new ArgumentNullException("credentials");
            }

            if (credentials is UsernameCredentials)
            {
                UsernameCredentials username = credentials as UsernameCredentials;
                base.ChannelFactory.Credentials.UserName.UserName = username.Username;
                base.ChannelFactory.Credentials.UserName.Password = username.Password;
            }
            else if (credentials is IssuedTokenCredentials)
            {
                var issuedToken = credentials as IssuedTokenCredentials;
                _rstr = issuedToken.RSTR;
            }
            else if (credentials is WindowsCredentials)
            { }
            else
            {
                throw new ArgumentOutOfRangeException("credentials", "type was not expected");
            }
        }

    Next - when WSTrustClient constructs the RST message to the STS, the issued token header must be embedded when needed:

        private Message BuildRequestAsMessage(RequestSecurityToken request)
        {
            var message = Message.CreateMessage(
                base.Endpoint.Binding.MessageVersion ?? MessageVersion.Default,
                IssueAction,
                (BodyWriter)new WSTrustRequestBodyWriter(request));

            if (_rstr != null)
            {
                message.Headers.Add(new IssuedTokenHeader(_rstr));
            }

            return message;
        }

    HTH
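    For context, here is a hypothetical sketch of how the two legs might be chained once these modifications are in place (the endpoint addresses are invented, and the binding/credential class names for the username leg are assumed to follow the same pattern as the issued-token ones above):

        // Leg 1: request a token from the IdP using username credentials.
        var idpClient = new WSTrustClient(
            new WSTrustBindingUsernameMixed(),                     // assumed kit binding for the username leg
            new EndpointAddress("https://idp.example.com/issue"),  // illustrative address
            new UsernameCredentials("alice", "secret"));
        // ...the kit's client is asynchronous in Silverlight; when its RSTR arrives:

        // Leg 2: present that token to the Resource STS via the new credential type.
        RequestSecurityTokenResponse rstrFromIdP = /* RSTR from leg 1 */ null;
        var rstsClient = new WSTrustClient(
            new WSTrustBindingIssuedTokenMixed(),
            new EndpointAddress("https://rsts.example.com/issue"), // illustrative address
            new IssuedTokenCredentials(rstrFromIdP));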

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using somewhat simpler means that I will now describe.

    In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set; }
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for NHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described in a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create.

    Here's for example how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary, you create a record class and a migration, and you're in business and can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.
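    As an aside, the constructor above also injects ISignals and ICacheManager; a minimal sketch of the usual Orchard pattern for combining them with the repository (the signal name here is an illustrative assumption, not from the module):

        public IEnumerable<SettingsRecord> GetSettings() {
            return _cacheManager.Get("ThemePicker.Settings", ctx => {
                // Re-run the query whenever this signal is triggered.
                ctx.Monitor(_signals.When("ThemePicker.SettingsChanged"));
                return _repository.Table.ToList();
            });
        }

    After a Create or Delete, calling _signals.Trigger("ThemePicker.SettingsChanged") invalidates the cached list so readers re-query.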

    Read the article

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, pixels drawn are 1 to 1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2d games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 / the size of the texture wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel for pixel the same as Photoshop.

    For 3d games sometimes I still want to be able to do this. But depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity. And at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    * UPDATE *

    I'm going to leave this here in case anyone has a different solution, can explain how they do it, or why this works. This is what I figured out. In my system I use a 10th scale unit to pixels on non-retina displays and a 20th scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width I divide by 20 to get the OpenGL unit width. Then divide that by 2 to get the left and right unit position. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary thing I made up to do .1 units = 1 non-retina pixel or 2 retina pixels. I could have made it .01 units = 2 pixels and someday I might switch to that. But for now it's the other.

    So the width of the screen in units is 32.0, and that means the left most pixel is at -16.0 and the right most is at 16.0. After messing a bit I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16 I get the depth required to get 1:1 pixels. I don't know why. I don't know if the 16 is related to the screen size or just a lucky guess.

    But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
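    For what it's worth, the 16 can be derived from the matrixPerspective code above (a reader's derivation, not part of the original post). That code sets m[0] = 2*near/(right-left) = 1/tan(fov/2), and because top and bottom are divided by aspect, the angle parameter acts as the horizontal FOV. At depth Z the frustum's half-width is Z*tan(fov/2), and pixel unity occurs when that half-width equals the half screen width in world units (16 in this setup). Solving:

        Z = halfScreenWidthInUnits / tan(fov / 2) = 16 * m[0]

    which gives 16/tan(22.5°) ≈ 38.6 for a 45° FOV and 16/tan(15°) ≈ 59.7 for 30°, matching the empirically found 38.5 and 59.5. So the 16 is the half screen width in units, not a lucky guess.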

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
    I am working on an OpenGL project which requires an object selection feature. I use the OpenTK framework to do this; however OpenTK doesn't support the glu.PickMatrix() method to define the picking region. I ended up googling its implementation and here is what I got:

        void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport)
        {
            if (deltax <= 0 || deltay <= 0)
            {
                return;
            }
            GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax,
                         (viewport[3] - 2 * (y - viewport[1])) / deltay, 0);
            GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0);
        }

    I totally fail to understand this piece of code. Moreover, it doesn't work with my following code sample:

        //select buffer
        private int[] _selectBuffer = new int[512];

        private void Init()
        {
            float[] _triangleVertices = new float[]
            {
                 0.0f,  1.0f, 0.0f,
                -1.0f, -1.0f, 0.0f,
                 1.0f, -1.0f, 0.0f
            };

            float[] _triangleColors = new float[]
            {
                0.0f, 1.0f, 0.0f,
                1.0f, 0.0f, 0.0f,
                0.0f, 0.0f, 1.0f
            };

            GL.GenBuffers(2, _vBO);
            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw);
            GL.VertexPointer(3, VertexPointerType.Float, 0, 0);

            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw);
            GL.ColorPointer(3, ColorPointerType.Float, 0, 0);

            GL.EnableClientState(ArrayCap.VertexArray);
            GL.EnableClientState(ArrayCap.ColorArray);

            //Select buffer set up
            GL.SelectBuffer(512, _selectBuffer);
        }

        private void glControlWindow_Paint(object sender, PaintEventArgs e)
        {
            GL.Clear(ClearBufferMask.ColorBufferBit);
            GL.Clear(ClearBufferMask.DepthBufferBit);

            float[] eyes = { 0.0f, 0.0f, -10.0f };
            float[] target = { 0.0f, 0.0f, 0.0f };
            Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degrees = 0.785398163 rads
            Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0);
            Matrix4 model = Matrix4.Identity;
            Matrix4 MV = view * model;

            //First clear buffers
            GL.Clear(ClearBufferMask.ColorBufferBit);
            GL.Clear(ClearBufferMask.DepthBufferBit);

            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            GL.LoadMatrix(ref projection);

            GL.MatrixMode(MatrixMode.Modelview);
            GL.LoadIdentity();
            GL.LoadMatrix(ref MV);

            GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height);
            GL.Enable(EnableCap.DepthTest); //Enable correct Z drawing
            GL.DepthFunc(DepthFunction.Less); //Enable correct Z drawing

            GL.MatrixMode(MatrixMode.Modelview);
            GL.PushMatrix();
            GL.Translate(3.0f, 0.0f, 0.0f);
            DrawTriangle();
            GL.PopMatrix();

            GL.PushMatrix();
            GL.Translate(-3.0f, 0.0f, 0.0f);
            DrawTriangle();
            GL.PopMatrix();

            //Finally...
            GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not overrun GPU
            glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control
        }

        private void DrawTriangle()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
            GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
            GL.EnableClientState(ArrayCap.VertexArray);
            GL.DrawArrays(BeginMode.Triangles, 0, 3);
            GL.DisableClientState(ArrayCap.VertexArray);
        }

        //mouse click event implementation
        private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            //Enter select mode. Pretend drawing.
            GL.RenderMode(RenderingMode.Select);

            int[] viewport = new int[4];
            GL.GetInteger(GetPName.Viewport, viewport);

            GL.PushMatrix();
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            GluPickMatrix(e.X, e.Y, 5, 5, viewport);
            Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // same projection matrix as in glControlWindow_Paint
            GL.LoadMatrix(ref projection);

            GL.MatrixMode(MatrixMode.Modelview);

            int i = 0;
            int hits;

            GL.PushMatrix();
            GL.Translate(3.0f, 0.0f, 0.0f);
            GL.PushName(i);
            DrawTriangle();
            GL.PopName();
            GL.PopMatrix();

            i++;
            GL.PushMatrix();
            GL.Translate(-3.0f, 0.0f, 0.0f);
            GL.PushName(i);
            DrawTriangle();
            GL.PopName();
            GL.PopMatrix();

            hits = GL.RenderMode(RenderingMode.Render);

            // .....hits processing code goes here...

            GL.PopMatrix();
            glControlWindow.Invalidate();
        }

    I expect to get only one hit every time I click inside a triangle, but I always get 2 no matter where I click. I suspect there is something wrong with the implementation of GluPickMatrix, but I haven't figured it out yet.
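    Two details are worth checking in this pattern (offered as hypotheses, not a confirmed diagnosis). First, GL.LoadMatrix replaces the current matrix outright, so loading the projection after calling GluPickMatrix wipes out the pick transform, leaving the full frustum in effect and therefore a hit for every drawn object. Second, window mouse coordinates are top-down while the GL viewport is bottom-up, so the y value needs flipping. A sketch of the adjusted click-handler fragment (assuming OpenTK exposes GL.MultMatrix for Matrix4 as it does GL.LoadMatrix):

        GL.MatrixMode(MatrixMode.Projection);
        GL.LoadIdentity();
        // Flip y: window coordinates are top-down, GL viewport coordinates are bottom-up.
        GluPickMatrix(e.X, viewport[3] - e.Y, 5, 5, viewport);
        // Multiply rather than load, so the pick matrix is preserved.
        GL.MultMatrix(ref projection);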

    Read the article

  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren’t many applications that are of value without having some form of data store. In Windows Phone development we have a few options. You can store text directly to isolated storage. You can also use a number of third party libraries to create or mimic databases in isolated storage. With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL. In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables and a data context class that is used as the overall database definition. The table class consists of column definitions as you would expect. They can have relationships and constraints as with any relational DBMS. Below is an example of a table definition.

    First you will need to add some assembly references to the code file.

        using System.ComponentModel;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns. It needs to implement INotifyPropertyChanged and INotifyPropertyChanging. Each level of the class needs to be decorated with the attribute appropriate for that part of the definition. Where the class represents the table, the properties represent the columns. In this example you will see that the column is marked as a primary key, not nullable, with an auto-generated value. You will also notice that the column property’s set method uses the NotifyPropertyChanging and NotifyPropertyChanged methods in order to make sure that the proper events are fired.

        [Table]
        public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private void NotifyPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public event PropertyChangingEventHandler PropertyChanging;
            private void NotifyPropertyChanging(string propertyName)
            {
                if (PropertyChanging != null)
                {
                    PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
                }
            }

            private int _TableKey;
            [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", CanBeNull = false, AutoSync = AutoSync.OnInsert)]
            public int TableKey
            {
                get { return _TableKey; }
                set
                {
                    NotifyPropertyChanging("TableKey");
                    _TableKey = value;
                    NotifyPropertyChanged("TableKey");
                }
            }
        }

    The last part of the database definition that needs to be created is the data context. This is a simple class that takes an isolated storage location connection string in its constructor and then instantiates tables as public properties.

        public class MyDataContext : DataContext
        {
            public MyDataContext(string connectionString) : base(connectionString)
            {
                MyRecords = this.GetTable<MyTable>();
            }

            public Table<MyTable> MyRecords;
        }

    Creating A New Database Instance

    Now that we have a database definition it is time to create an instance of the data context within our Windows Phone app. When your app fires up it should check if the database already exists and create an instance if it does not. I would suggest that this be part of the constructor of your ViewModel.

        db = new MyDataContext(connectionString);
        if (!db.DatabaseExists())
        {
            db.CreateDatabase();
        }

    The next thing you have to know is how the connection string for isolated storage should be constructed. The main sticking point I have found is that the database cannot be created unless the file mode is read/write. You may have different connection strings, but the initial one needs to be similar to the following.

        string connectionString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using Your Database

    Now that you have done all the up front work it is time to put the database to use. To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel. These will do the CRUD work of your application. What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data.

        private MyDataContext myDb;

        private ObservableCollection<MyTable> _viewRecords;
        public ObservableCollection<MyTable> ViewRecords
        {
            get { return _viewRecords; }
            set
            {
                _viewRecords = value;
                NotifyPropertyChanged("ViewRecords");
            }
        }

        public void LoadMyTableData()
        {
            var tempItems = from MyTable myRecord in myDb.MyRecords
                            select myRecord;
            ViewRecords = new ObservableCollection<MyTable>(tempItems);
        }

        public void SaveChangesToDb()
        {
            myDb.SubmitChanges();
        }

        public void AddMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.InsertOnSubmit(newScan);
            myDb.SubmitChanges();
        }

        public void DeleteMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.DeleteOnSubmit(newScan);
            myDb.SubmitChanges();
        }

    Updating An Existing Database

    What happens when you need to change the structure of your database? Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base. On the other hand, it does give you control of the update. In this example you will see the DatabaseSchemaUpdater in action. Assuming we added a “Notes” field to the MyTable structure, the following code will check if the database is the latest version and add the field if it isn’t.

        if (!myDb.DatabaseExists())
        {
            myDb.CreateDatabase();
        }
        else
        {
            DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
            if (dbUpdater.DatabaseSchemaVersion < 2)
            {
                dbUpdater.AddColumn<MyTable>("Notes");
                dbUpdater.DatabaseSchemaVersion = 2;
                dbUpdater.Execute();
            }
        }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers. It turns out to be worth the investment.

    del.icio.us Tags: Windows Phone,Windows Phone 7,LINQ to SQL,LINQ,Database,Isolated Storage
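    To tie the pieces together, a minimal sketch of the ViewModel constructor suggested above (the class name and wiring are illustrative; note that DatabaseSchemaUpdater lives in the Microsoft.Phone.Data.Linq namespace):

        public class MyViewModel
        {
            private MyDataContext myDb;

            public MyViewModel()
            {
                // Create the database on first run, as described above.
                string connectionString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";
                myDb = new MyDataContext(connectionString);
                if (!myDb.DatabaseExists())
                {
                    myDb.CreateDatabase();
                }
                LoadMyTableData();
            }

            // LoadMyTableData, AddMyTableItem, etc. as defined above.
        }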

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS, based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials

    Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:

    1) First create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.
    2) Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using the credential mapping are that:

    1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and
    2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed.

    You can cut down the number of users that have access to a data source to reduce the user maintenance overhead. For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS.

    The disadvantages of using the credential map are that:

    1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries.
    2) You can’t share a credential map between data sources, so they must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map. This is enabled by setting “use-database-credentials” to true. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:

    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”. The documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:

    - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.

    Read the article

  • Silverlight and WCF caching

    - by subodhnpushpak
    There are scenarios where a Silverlight client calls a WCF (or REST) service for data. If the data is not cached on the WCF layer, those calls can take considerable resources at the server. Keeping that in mind, along with the fact that caching is a cross-cutting aspect, it should be as easy as possible to put caching wherever required. The good thing about this solution is that it caches based on the inputs. The input can be a basic type or any complex type. If the input changes, the data is fetched and then cached for further use. If the same input is provided again, the data is fetched from the cache. The cache logic itself is implemented as a PostSharp aspect, and it is as easy as putting an attribute over the service call to switch on caching. Notice how clean the code is:

        [OperationContract]
        [CacheOnArgs(typeof(int))] // cache based on the actual value
        public string DoWork(int value)
        {
            return string.Format("You entered: {0} @ cached time {1}", value, System.DateTime.Now);
        }

    The cache is implemented as a PostSharp aspect as below:

        public override void OnInvocation(MethodInvocationEventArgs eventArgs)
        {
            try
            {
                object value = new object();
                object[] args = eventArgs.GetArgumentArray();
                if (args != null && args.Count() > 0)
                {
                    // Compute the cache key (details omitted).
                    string key = string.Format("{0}_{1}", eventArgs.Method.Name, XMLUtility<object>.GetDataContractXml(args[0], null));

                    value = GetFromCache(key);
                    if (value == null)
                    {
                        eventArgs.Proceed();
                        value = XMLUtility<object>.GetDataContractXml(eventArgs.ReturnValue, null);
                        value = eventArgs.ReturnValue;
                        AddToCache(key, value);
                        return;
                    }

                    Log(string.Format("Data returned from Cache {0}", value));
                    eventArgs.ReturnValue = value;
                }
            }
            catch (Exception ex)
            {
                //ApplicationLogger.LogException(ex.Message, Source.UtilityService);
            }
        }

        private object GetFromCache(string inputKey)
        {
            if (ServerConfig.CachingEnabled)
            {
                return WCFCache.Current[inputKey];
            }
            return null;
        }

        private void AddToCache(string inputKey, object outputValue)
        {
            if (ServerConfig.CachingEnabled)
            {
                if (WCFCache.Current.CachedItemsNumber < ServerConfig.NumberOfCachedItems)
                {
                    if (ServerConfig.SlidingExpirationTime <= 0 || ServerConfig.SlidingExpirationTime == int.MaxValue)
                    {
                        WCFCache.Current[inputKey] = outputValue;
                    }
                    else
                    {
                        WCFCache.Current.Insert(inputKey, outputValue, new TimeSpan(0, 0, ServerConfig.SlidingExpirationTime), true);

                        // _bw.DoWork += bw_DoWork;
                        // string arg = string.Format("{0}|{1}", inputKey, outputValue);
                        // _bw.RunWorkerAsync(inputKey);
                    }
                }
            }
        }

    The cache class can be extended to support Velocity / memcached / NCache. The attribute can be used over REST services as well. Hope the above helps. Here is the code base for the same.

    Please do provide your inputs / comments.
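    The post shows only the aspect's OnInvocation override; for completeness, a sketch of how the containing attribute class might be declared with PostSharp 1.x Laos-style aspects (the class shape and constructor are assumptions inferred from the [CacheOnArgs(typeof(int))] usage above, not code from the post):

        [Serializable]
        public class CacheOnArgsAttribute : OnMethodInvocationAspect
        {
            private readonly Type _argType;

            // The usage above passes the argument type the cache key is built from.
            public CacheOnArgsAttribute(Type argType)
            {
                _argType = argType;
            }

            // public override void OnInvocation(MethodInvocationEventArgs eventArgs) { ...as shown above... }
        }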

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue

    Following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF.

    Inserting Data

    As indicated in the 2nd post: “One Entity matches with a DB record, and one property matches with a Table Column”. To start, we need to create an object from one of the Entities:

        EMPLEADOS empleado = new EMPLEADOS();

    Also, as I said previously, there is the possibility to use the static factory function defined by VS for each Entity. Once we have created the object, we can access its properties and fill them like a common class:

        empleado.NOMBRE = "Javier Torrecilla";

    After finishing filling our Entity properties, we need to add the object to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);

    or

        enti.AddToEMPLEADOS(empleado);

    Both methods will do the same action: create an insert statement. Have we finished? No. Any Entity has a property called “EntityState”. This property is an enum of type EntityState, which has the following values:

    - Detached: the Entity is created, but not added to the Context.
    - Unchanged: there are no pending changes in the Entity.
    - Added: the Entity is added to the ObjectSet, but it is not yet sent to the DB.
    - Deleted: the object is deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's see the several values of the property during the creation steps:

    1. While the object is created and we are filling the props: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not indicate that the record is in the DB.
    3. Saving the data: to save the data in the DB, we call the “SaveChanges” method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? This function will synchronize and send all pending changes to the DB. It will add, modify or delete all Entities whose EntityState property is set to Added, Deleted or Modified. After finishing, all added or modified Entities will change their state to “Unchanged”, and deleted Entities take the “Detached” state.
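    Putting the whole lifecycle together in one sketch (the Debug.Assert checks are illustrative, assuming the default EF4 code generation where entities derive from EntityObject and expose EntityState):

        EMPLEADOS empleado = new EMPLEADOS();
        empleado.NOMBRE = "Javier Torrecilla";
        // Not yet tracked by the context:
        Debug.Assert(empleado.EntityState == EntityState.Detached);

        enti.EMPLEADOS.AddObject(empleado);
        // Tracked, but the INSERT has not been sent yet:
        Debug.Assert(empleado.EntityState == EntityState.Added);

        enti.SaveChanges();
        // The row now exists in the DB and the entity is in sync:
        Debug.Assert(empleado.EntityState == EntityState.Unchanged);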

    Read the article

  • What are the best practices to use NHibernate sessions in asp.net (mvc/web api)?

    - by mrt181
    I have the following setup in my project:

        public class WebApiApplication : System.Web.HttpApplication
        {
            public static ISessionFactory SessionFactory { get; private set; }

            public WebApiApplication()
            {
                this.BeginRequest += delegate
                {
                    var session = SessionFactory.OpenSession();
                    CurrentSessionContext.Bind(session);
                };
                this.EndRequest += delegate
                {
                    var session = SessionFactory.GetCurrentSession();
                    if (session == null)
                    {
                        return;
                    }
                    session = CurrentSessionContext.Unbind(SessionFactory);
                    session.Dispose();
                };
            }

            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();
                FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
                RouteConfig.RegisterRoutes(RouteTable.Routes);
                BundleConfig.RegisterBundles(BundleTable.Bundles);

                var assembly = Assembly.GetCallingAssembly();
                SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory;
            }
        }

        public class PositionsController : ApiController
        {
            private readonly ISession session;

            public PositionsController()
            {
                this.session = WebApiApplication.SessionFactory.GetCurrentSession();
            }

            public IEnumerable<Position> Get()
            {
                var result = this.session.Query<Position>().Cacheable().ToList();
                if (!result.Any())
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }
                return result;
            }

            public HttpResponseMessage Post(PositionDataTransfer dto)
            {
                //TODO: Map dto to model
                IEnumerable<Position> positions = null;
                using (var transaction = this.session.BeginTransaction())
                {
                    this.session.SaveOrUpdate(positions);
                    try
                    {
                        transaction.Commit();
                    }
                    catch (StaleObjectStateException)
                    {
                        if (transaction != null && transaction.IsActive)
                        {
                            transaction.Rollback();
                        }
                    }
                }
                var response = this.Request.CreateResponse(HttpStatusCode.Created, dto);
                response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name);
                return response;
            }

            public void Put(int id, string value)
            {
                //TODO: Implement PUT
                throw new NotImplementedException();
            }

            public void Delete(int id)
            {
                //TODO: Implement DELETE
                throw new NotImplementedException();
            }
        }

    I am not sure if this is the recommended way to inject the session into the controller. I was thinking about using DI, but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the controller's constructor to get this:

        public PositionsController(ISession session)
        {
            this.session = session;
        }

    Question: What is the recommended way to use NHibernate sessions in asp.net mvc/web api?
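    Not an authoritative answer, but one common pattern is to keep the BeginRequest/EndRequest lifetime management and let the container hand controllers the request-bound session via a factory binding; an illustrative, Ninject-style sketch (the container API is an assumption, and any container with factory bindings works the same way):

        kernel.Bind<ISession>()
              .ToMethod(ctx => WebApiApplication.SessionFactory.GetCurrentSession())
              .InRequestScope();

    With that in place, the constructor-injected version shown above resolves the session that BeginRequest bound, and EndRequest still disposes it.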

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. That article has proved pretty popular, so as a follow-up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module at runtime.

    Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality of that is that this single connection is effectively fixed within the application, right? Well no; this, it turns out, is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections. In fact this has been possible for a long time using a custom extension point with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no-one likes to write any more code than they need to. So, is there an easier way? Yes indeed: a little-publicized feature that's available in all versions of 11g, the ELEnvInfoProvider.

    What Does it Do?

    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example).

    Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a data-source, you can instead plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM.

    To define the expression itself you'll need to do a couple of things:

    First of all you'll need a managed bean of some sort - e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.)

    The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression. So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):

        <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
          <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
            <AppModuleConfig DeployPlatform="LOCAL"
                             JDBCName="${connectionManager.connection}"
                             jbo.project="oracle.demo.model.Model"
                             name="TargetAppModuleLocal"
                             ApplicationName="oracle.demo.model.TargetAppModule">
              <AM-Pooling jbo.doconnectionpooling="true"/>
              <Database jbo.locking.mode="optimistic"/>
              <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
              <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
            </AppModuleConfig>
          </AppModuleConfigBag>
        </BC4JConfig>

    Still Don't Quite Get It?

    So far you might be thinking, well that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account.

    However, think about it for a second. Under what circumstances does a new AM get instantiated? Well, at the first reference of the AM within the application, yes, but also whenever a Task Flow is entered, if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently.

    Hopefully you'll find this feature useful, let me know...

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the query will grow in size by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try and implement memcache to keep database activity low for reads.

    If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cache result. It seems like a poor design.

    I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results.

    Has someone already solved this problem in a widely accepted way? What's the best method of doing this?

    EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity for invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario.

    My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate. Is there really no benefit to any type of caching? Is the best advice really to just allow a website query to go through all the layers and hit the database every request?
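    To illustrate the backwards-numbered-pages idea from the question, a rough sketch (the cache and database interfaces are hypothetical stand-ins for a memcached client and data layer): pages are keyed from the oldest end, so old pages never change and only the newest page's entry is ever refreshed:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface ICache
        {
            T Get<T>(string key) where T : class;
            void Set(string key, object value);
        }

        public interface IDb
        {
            int CountRows(string key);
            List<string> FetchPage(string key, int offset, int count); // rows as rendered HTML, say
        }

        public static class LatestRows
        {
            public static List<string> GetLatest(string key, int pageSize, ICache cache, IDb db)
            {
                int total = db.CountRows(key);      // could itself be cached briefly
                int lastPage = total / pageSize;    // pages 0..lastPage, oldest first

                // Only the growing last page and the one before it can contain the latest rows.
                var rows = new List<string>();
                for (int page = Math.Max(0, lastPage - 1); page <= lastPage; page++)
                {
                    string cacheKey = string.Format("{0}:page:{1}", key, page);
                    var cached = cache.Get<List<string>>(cacheKey);
                    if (cached == null)
                    {
                        cached = db.FetchPage(key, page * pageSize, pageSize);
                        cache.Set(cacheKey, cached);
                    }
                    rows.AddRange(cached);
                }
                // Keep only the newest pageSize rows.
                return rows.Skip(Math.Max(0, rows.Count - pageSize)).ToList();
            }
        }

    On insert, only the "{key}:page:{lastPage}" entry needs invalidating, so the cascade the question worries about never happens.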

    Read the article

  • Delphi Editbox causing unexplainable errors...

    - by NeoNMD
    On a form I have 8 edit boxes that I do the exact same thing with. They are arranged in 2 sets of 4, one is Upper and the other is Lower. I kept getting errors clearing all the edit boxes so I went through clearing them 1 by 1 and found that 1 of the edit boxes just didnt work and when I tried to run the program and change that exit box it caused the debugger to jump to a point in the code with the database (even though the edit boxes had nothing to do with the database and they arent in a procedure or stack with a database in it) and say the program has access violation there. So I then removed all mention of that edit box and the code worked perfectly again, so I deleted that edit box, copied and pasted another edit box and left all values the same, then went through my code and copied the code from the other sections and simply renamed it for the new Edit box and it STILL causes an error even though it is entirely new. I cannot figure it out so I ask you all, what the hell? The editbox in question is "Edit1" unit DefinitionCoreV2; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, SQLiteTable3, StdCtrls; type TDefinitionFrm = class(TForm) GrpCompetition: TGroupBox; CmbCompSele: TComboBox; BtnCompetitionAdd: TButton; BtnCompetitionUpdate: TButton; BtnCompetitionRevert: TButton; GrpCompetitionDetails: TGroupBox; LblCompetitionIDTitle: TLabel; EdtCompID: TEdit; LblCompetitionDescriptionTitle: TLabel; EdtCompDesc: TEdit; LblCompetitionNotesTitle: TLabel; EdtCompNote: TEdit; LblCompetitionLocationTitle: TLabel; EdtCompLoca: TEdit; BtnCompetitionDelete: TButton; GrpSection: TGroupBox; LblSectionID: TLabel; LblGender: TLabel; LblAge: TLabel; LblLevel: TLabel; LblWeight: TLabel; LblType: TLabel; LblHeight: TLabel; LblCompetitionID: TLabel; BtnSectionAdd: TButton; EdtSectionID: TEdit; CmbGender: TComboBox; BtnSectionUpdate: TButton; BtnSectionRevert: TButton; CmbAgeRange: TComboBox; CmbLevelRange: TComboBox; CmbType: TComboBox; CmbWeightRange: TComboBox; CmbHeightRange: TComboBox; EdtSectCompetitionID: TEdit; BtnSectionDelete: TButton; GrpSectionDetails: TGroupBox; EdtLowerAge: TEdit; EdtLowerWeight: TEdit; EdtLowerHeight: TEdit; EdtUpperAge: TEdit; EdtUpperLevel: TEdit; EdtUpperWeight: TEdit; EdtUpperHeight: TEdit; LblAgeRule: TLabel; LblLevelRule: TLabel; LblWeightRule: TLabel; LblHeightRule: TLabel; LblCompetitionSelect: TLabel; LblSectionSelect: TLabel; CmbSectSele: TComboBox; Edit1: TEdit; procedure FormCreate(Sender: TObject); procedure BtnCompetitionAddClick(Sender: TObject); procedure CmbCompSeleChange(Sender: TObject); procedure BtnCompetitionUpdateClick(Sender: TObject); procedure BtnCompetitionRevertClick(Sender: TObject); procedure BtnCompetitionDeleteClick(Sender: TObject); procedure CmbSectSeleChange(Sender: TObject); procedure BtnSectionAddClick(Sender: TObject); procedure BtnSectionUpdateClick(Sender: TObject); procedure BtnSectionRevertClick(Sender: TObject); procedure BtnSectionDeleteClick(Sender: TObject); procedure CmbAgeRangeChange(Sender: TObject); procedure CmbLevelRangeChange(Sender: TObject); procedure CmbWeightRangeChange(Sender: TObject); procedure CmbHeightRangeChange(Sender: TObject); private procedure UpdateCmbCompSele; procedure AddComp; procedure RevertComp; procedure AddSect; procedure RevertSect; procedure UpdateCmbSectSele; procedure ClearSect; { Private declarations } public { Public declarations } end; var DefinitionFrm: TDefinitionFrm; implementation {$R *.dfm} procedure TDefinitionFrm.UpdateCmbCompSele; 
var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sCompTitle : string; bNext : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try CmbCompSele.Items.Clear; Repeat begin sCompTitle:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Description']); CmbCompSele.Items.Add(sCompTitle); bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.UpdateCmbSectSele; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sSQL : string; sSectTitle : string; bNext : boolean; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); If sltb.RowCount =0 then begin sltb := slDb.GetTable('SELECT * FROM SectionTable'); bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)'; sldb.ExecSQL(sSQL); sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); end; try CmbSectSele.Items.Clear; Repeat begin sSectTitle:=sltb.FieldAsString(sltb.FieldIndex['SectionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Type'])+':'+sltb.FieldAsString(sltb.FieldIndex['Gender'])+':'+sltb.FieldAsString(sltb.FieldIndex['Age'])+':'+sltb.FieldAsString(sltb.FieldIndex['Level'])+':'+sltb.FieldAsString(sltb.FieldIndex['Weight'])+':'+sltb.FieldAsString(sltb.FieldIndex['Height']); CmbSectSele.Items.Add(sSectTitle); //CmbType.Items.Strings[sltb.FieldAsInteger(sltb.FieldIndex['Type'])] Works but has logic errors bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.AddComp; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO CompetitionTable(CompetitionID,Description) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['CompetitionID'])+1)+',"New Competition")'; sldb.ExecSQL(sSQL); finally sltb.Free; end; finally sldb.Free; end; UpdateCmbCompSele; end; procedure TDefinitionFrm.AddSect; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES 
    ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)';
      sldb.ExecSQL(sSQL);
    finally
      sltb.Free;
    end;
  finally
    sldb.Free;
  end;
  UpdateCmbSectSele;
end;

procedure TDefinitionFrm.RevertComp;
var
  slDBpath: string;
  sldb: TSQLiteDatabase;
  sltb: TSQLiteTable;
  iID: integer;
begin
  slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
  if not FileExists(slDBPath) then
  begin
    MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
    exit;
  end;
  sldb := TSQLiteDatabase.Create(slDBPath);
  try
    if CmbCompSele.Text <> '' then
    begin
      iID := StrToInt(Copy(CmbCompSele.Text, 0, Pos(':', CmbCompSele.Text) - 1));
      sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID=' + IntToStr(iID)) //ItemIndex starts at 0, CompID at 1
    end
    else
      sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID=1');
    try
      EdtCompID.Text   := sltb.FieldAsString(sltb.FieldIndex['CompetitionID']);
      EdtCompLoca.Text := sltb.FieldAsString(sltb.FieldIndex['Location']);
      EdtCompDesc.Text := sltb.FieldAsString(sltb.FieldIndex['Description']);
      EdtCompNote.Text := sltb.FieldAsString(sltb.FieldIndex['Notes']);
    finally
      sltb.Free;
    end;
  finally
    sldb.Free;
  end;
end;

procedure TDefinitionFrm.RevertSect;
var
  slDBpath: string;
  sldb: TSQLiteDatabase;
  sltb: TSQLiteTable;
  iID: integer;
  sTemp: string;
begin
  slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
  if not FileExists(slDBPath) then
  begin
    MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
    exit;
  end;
  sldb := TSQLiteDatabase.Create(slDBPath);
  try
    if CmbCompSele.Text <> '' then
    begin
      iID := StrToInt(Copy(CmbSectSele.Text, 0, Pos(':', CmbSectSele.Text) - 1));
      sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE SectionID=' + IntToStr(iID)); //ItemIndex starts at 0, CompID at 1
    end
    else
      sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID=' + EdtCompID.Text);
    try
      EdtSectionID.Text := sltb.FieldAsString(sltb.FieldIndex['SectionID']);
      EdtSectCompetitionID.Text := sltb.FieldAsString(sltb.FieldIndex['CompetitionID']);
      case sltb.FieldAsInteger(sltb.FieldIndex['Type']) of
        1: CmbType.ItemIndex := 0;
        2: CmbType.ItemIndex := 0;
        3: CmbType.ItemIndex := 1;
        4: CmbType.ItemIndex := 1;
      end;
      case sltb.FieldAsInteger(sltb.FieldIndex['Gender']) of
        1: CmbGender.ItemIndex := 0;
        2: CmbGender.ItemIndex := 1;
        3: CmbGender.ItemIndex := 2;
      end;
      sTemp := sltb.FieldAsString(sltb.FieldIndex['Age']);
      if sTemp <> '' then
      begin
        //Decode
      end
      else
      begin
        LblAgeRule.Hide;
        EdtLowerAge.Text := ''; EdtLowerAge.Hide;
        EdtUpperAge.Text := ''; EdtUpperAge.Hide;
      end;
      sTemp := sltb.FieldAsString(sltb.FieldIndex['Level']);
      if sTemp <> '' then
      begin
        //Decode
      end
      else
      begin
        LblLevelRule.Hide;
        Edit1.Text := ''; Edit1.Hide;
        EdtUpperLevel.Text := ''; EdtUpperLevel.Hide;
      end;
      sTemp := sltb.FieldAsString(sltb.FieldIndex['Weight']);
      if sTemp <> '' then
      begin
        //Decode
      end
      else
      begin
        LblWeightRule.Hide;
        EdtLowerWeight.Text := ''; EdtLowerWeight.Hide;
        EdtUpperWeight.Text := ''; EdtUpperWeight.Hide;
      end;
      sTemp := sltb.FieldAsString(sltb.FieldIndex['Height']);
      if sTemp <> '' then
      begin
        //Decode
      end
      else
      begin
        LblHeightRule.Hide;
        EdtLowerHeight.Text := ''; EdtLowerHeight.Hide;
        EdtUpperHeight.Text := ''; EdtUpperHeight.Hide;
      end;
    finally
      sltb.Free;
    end;
  finally
    sldb.Free;
  end;
end;

procedure TDefinitionFrm.BtnCompetitionAddClick(Sender: TObject);
begin
  AddComp
end;

procedure TDefinitionFrm.ClearSect;
begin
  CmbSectSele.Clear;
  EdtSectionID.Text := '';
  EdtSectCompetitionID.Text := '';
  CmbType.Clear;
  CmbGender.Clear;
  CmbAgeRange.Clear;
  EdtLowerAge.Text := '';
  EdtUpperAge.Text := '';
  CmbLevelRange.Clear;
  Edit1.Text := '';
  EdtUpperLevel.Text := '';
  CmbWeightRange.Clear;
  EdtLowerWeight.Text := '';
  EdtUpperWeight.Text := '';
  CmbHeightRange.Clear;
  EdtLowerHeight.Text := '';
  EdtUpperHeight.Text := '';
end;

procedure TDefinitionFrm.CmbCompSeleChange(Sender: TObject);
begin
  if CmbCompSele.ItemIndex <> -1 then
  begin
    RevertComp;
    GrpSection.Enabled := True;
    CmbSectSele.Clear;
    ClearSect;
    UpdateCmbSectSele;
  end;
end;

procedure TDefinitionFrm.BtnCompetitionUpdateClick(Sender: TObject);
var
  slDBpath: string;
  sSQL: string;
  sldb: TSQLiteDatabase;
begin
  slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
  if not FileExists(slDBPath) then
  begin
    MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
    exit;
  end;
  sldb := TSQLiteDatabase.Create(slDBPath);
  try
    sSQL := 'UPDATE CompetitionTable SET Description="' + EdtCompDesc.Text +
            '",Location="' + EdtCompLoca.Text +
            '",Notes="' + EdtCompNote.Text +
            '" WHERE CompetitionID ="' + EdtCompID.Text + '";';
    sldb.ExecSQL(sSQL);
  finally
    sldb.Free;
  end;
end;

procedure TDefinitionFrm.BtnCompetitionRevertClick(Sender: TObject);
begin
  RevertComp;
end;

procedure TDefinitionFrm.BtnCompetitionDeleteClick(Sender: TObject);
var
  slDBpath: string;
  sSQL: string;
  sldb: TSQLiteDatabase;
  iID: integer;
begin
  if CmbCompSele.Text <> '' then
  begin
    if (CmbCompSele.Text[1] = '1') and (CmbCompSele.Text[2] = ':') then
    begin
      MessageDlg('Deleting the last record is a very bad idea :/', mtInformation, [mbOK], 0);
    end
    else
    begin
      slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
      if not FileExists(slDBPath) then
      begin
        MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
        exit;
      end;
      sldb := TSQLiteDatabase.Create(slDBPath);
      try
        iID := StrToInt(Copy(CmbCompSele.Text, 0, Pos(':', CmbCompSele.Text) - 1));
        sSQL := 'DELETE FROM SectionTable WHERE CompetitionID=' + IntToStr(iID) + ';';
        sldb.ExecSQL(sSQL);
        sSQL := 'DELETE FROM CompetitionTable WHERE CompetitionID=' + IntToStr(iID) + ';';
        sldb.ExecSQL(sSQL);
      finally
        sldb.Free;
      end;
      CmbCompSele.ItemIndex := 0;
      UpdateCmbCompSele;
      RevertComp;
      CmbCompSele.Text := 'Select Competition';
    end;
  end;
end;

procedure TDefinitionFrm.FormCreate(Sender: TObject);
begin
  UpdateCmbCompSele;
end;

procedure TDefinitionFrm.CmbSectSeleChange(Sender: TObject);
begin
  RevertSect;
end;

procedure TDefinitionFrm.BtnSectionAddClick(Sender: TObject);
begin
  AddSect;
end;

procedure TDefinitionFrm.BtnSectionUpdateClick(Sender: TObject); //change fields values
var
  slDBpath: string;
  sSQL: string;
  sldb: TSQLiteDatabase;
  iTypeCode: integer;
  iGenderCode: integer;
  sAgeStr, sLevelStr, sWeightStr, sHeightStr: string;
begin
  slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
  if not FileExists(slDBPath) then
  begin
    MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
    exit;
  end;
  sldb := TSQLiteDatabase.Create(slDBPath);
  try
    if CmbType.Text = 'Fighting' then
      iTypeCode := 1
    else
      iTypeCode := 3;
    if CmbGender.Text = 'Male' then
      iGenderCode := 1
    else if CmbGender.Text = 'Female' then
      iGenderCode := 2
    else
      iGenderCode := 3;
    case CmbAgeRange.ItemIndex of
      0: sAgeStr := 'o-' + EdtLowerAge.Text;
      1: sAgeStr := 'u-' + EdtLowerAge.Text;
      2: sAgeStr := EdtLowerAge.Text + '-' + EdtUpperAge.Text;
    end;
    case CmbLevelRange.ItemIndex of
      0: sLevelStr := 'o-' + Edit1.Text;
      1: sLevelStr := 'u-' + Edit1.Text;
      2: sLevelStr := Edit1.Text + '-' + EdtUpperLevel.Text;
    end;
    case CmbWeightRange.ItemIndex of
      0: sWeightStr := 'o-' + EdtLowerWeight.Text;
      1: sWeightStr := 'u-' + EdtLowerWeight.Text;
      2: sWeightStr := EdtLowerWeight.Text + '-' + EdtUpperWeight.Text;
    end;
    case CmbHeightRange.ItemIndex of
      0: sHeightStr := 'o-' + EdtLowerHeight.Text;
      1: sHeightStr := 'u-' + EdtLowerHeight.Text;
      2: sHeightStr := EdtLowerHeight.Text + '-' + EdtUpperHeight.Text;
    end;
    sSQL := 'UPDATE SectionTable SET Type="' + IntToStr(iTypeCode) +
            '",Gender="' + IntToStr(iGenderCode) +
            '" WHERE SectionID ="' + EdtSectionID.Text + '";';
    sldb.ExecSQL(sSQL);
  finally
    sldb.Free;
  end;
end;

procedure TDefinitionFrm.BtnSectionRevertClick(Sender: TObject);
begin
  RevertSect;
end;

procedure TDefinitionFrm.BtnSectionDeleteClick(Sender: TObject);
var
  slDBpath: string;
  sSQL: string;
  sldb: TSQLiteDatabase;
begin
  if CmbSectSele.Text[1] = '1' then
  begin
    MessageDlg('Deleting the last record is a very bad idea :/', mtInformation, [mbOK], 0);
  end
  else
  begin
    slDBPath := ExtractFilepath(application.exename) + 'Competitions.db';
    if not FileExists(slDBPath) then
    begin
      MessageDlg('Competitions.db does not exist.', mtInformation, [mbOK], 0);
      exit;
    end;
    sldb := TSQLiteDatabase.Create(slDBPath);
    try
      sSQL := 'DELETE FROM SectionTable WHERE SectionID=' + CmbSectSele.Text[1] + ';';
      sldb.ExecSQL(sSQL);
    finally
      sldb.Free;
    end;
    CmbSectSele.ItemIndex := 0;
    UpdateCmbSectSele;
    RevertSect;
    CmbSectSele.Text := 'Select Competition';
  end;
end;

procedure TDefinitionFrm.CmbAgeRangeChange(Sender: TObject);
begin
  case CmbAgeRange.ItemIndex of
    0: begin EdtLowerAge.Show; LblAgeRule.Caption := 'Over and including'; LblAgeRule.Show; EdtUpperAge.Hide; end;
    1: begin EdtLowerAge.Show; LblAgeRule.Caption := 'Under and including'; LblAgeRule.Show; EdtUpperAge.Hide; end;
    2: begin EdtLowerAge.Show; LblAgeRule.Caption := 'LblAgeRule'; LblAgeRule.Hide; EdtUpperAge.Show; end;
  end;
end;

procedure TDefinitionFrm.CmbLevelRangeChange(Sender: TObject);
begin
  case CmbLevelRange.ItemIndex of
    0: begin Edit1.Show; LblLevelRule.Caption := 'Over and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end;
    1: begin Edit1.Show; LblLevelRule.Caption := 'Under and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end;
    2: begin Edit1.Show; LblLevelRule.Caption := 'LblLevelRule'; LblLevelRule.Hide; EdtUpperLevel.Show; end;
  end;
end;

procedure TDefinitionFrm.CmbWeightRangeChange(Sender: TObject);
begin
  case CmbWeightRange.ItemIndex of
    0: begin EdtLowerWeight.Show; LblWeightRule.Caption := 'Over and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end;
    1: begin EdtLowerWeight.Show; LblWeightRule.Caption := 'Under and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end;
    2: begin EdtLowerWeight.Show; LblWeightRule.Caption := 'LblWeightRule'; LblWeightRule.Hide; EdtUpperWeight.Show; end;
  end;
end;

procedure TDefinitionFrm.CmbHeightRangeChange(Sender: TObject);
begin
  case CmbHeightRange.ItemIndex of
    0: begin EdtLowerHeight.Show; LblHeightRule.Caption := 'Over and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end;
    1: begin EdtLowerHeight.Show; LblHeightRule.Caption := 'Under and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end;
    2: begin EdtLowerHeight.Show; LblHeightRule.Caption := 'LblHeightRule'; LblHeightRule.Hide; EdtUpperHeight.Show; end;
  end;
end;

end.
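
A side note on the SQL handling above: every statement is built by string concatenation, so a single quote typed into Description or Notes breaks the statement (and invites SQL injection). A minimal hardening sketch, using only the wrapper call already shown (ExecSQL) plus the standard SysUtils routines Format and QuotedStr; the names and table layout are taken from the code above, nothing else is assumed. Inside BtnCompetitionUpdateClick the concatenation could become:

    sSQL := Format(
      'UPDATE CompetitionTable SET Description=%s, Location=%s, Notes=%s ' +
      'WHERE CompetitionID=%d;',
      [QuotedStr(EdtCompDesc.Text), QuotedStr(EdtCompLoca.Text),
       QuotedStr(EdtCompNote.Text), StrToInt(EdtCompID.Text)]);
    // QuotedStr wraps the value in single quotes and doubles any embedded
    // quotes, which is the escaping SQLite expects for text literals.
    sldb.ExecSQL(sSQL);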


  • SD Card reader not working on Sony Vaio

    - by TessellatingHeckler
    This laptop (Sony Vaio VGN-Z31MN/B PCG-6z2m) has been installed with Windows 7 64-bit, all the drivers from Sony's VAIO site are installed, and everything in Device Manager both (a) has a driver and (b) shows as working, with no exclamation marks or warnings. "Hide empty drives" in Folder Options is disabled, so the card reader appears, but it will not read the card ("please insert a disk in drive O:"). Previously, when the laptop had Windows XP on it, it could read the same card. The driver Windows Update suggests ("SD Card Reader") doesn't work, and Ricoh's own drivers install properly but show the same behaviour. Other third-party drivers suggested on forums (Acer and Texas Instruments FlashMedia) do not seem to install properly. I would post the PCI ID if I had it, but the device was just showing up as rimsptsk\diskricohmemorystickstorage (while it had the Ricoh driver installed). Edit: if there are any lower-level diagnostic utilities that might shed more light on this, I'd welcome hearing of them; anything that can write troubleshooting logs to the event log, identify chipsets, or similar. Update: device details are:
        SD\VID_03&OID_5344&PID_SD04G&REV_8.0\5&4617BC3&0&0 : SD Memory Card
        PCI\VEN_8086&DEV_2934&SUBSYS_9025104D&REV_03\3&21436425&0&E8 : Intel(R) ICH9 Family USB Universal Host Controller - 2934
        PCI\VEN_1180&DEV_0476&SUBSYS_9025104D&REV_BA\4&1BD7BFCD&0&20F0 : Ricoh R/RL/5C476(II) or Compatible CardBus Controller
        RIMSPTSK\DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00\MS0001 : SD Storage Card
        PCI\VEN_1180&DEV_0592&SUBSYS_9025104D&REV_11\4&1BD7BFCD&0&24F0 : Ricoh Memory Stick Host Controller
        WPDBUSENUMROOT\UMB\2&37C186B&1&STORAGE#VOLUME#_??_RIMSPTSK#DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00#MS0001# : O:\
        STORAGE\VOLUME\{C82A81B8-5A4F-11E0-AACC-806E6F6E6963}#0000000000100000 : Generic volume
        PCI\VEN_1180&DEV_0822&SUBSYS_9025104D&REV_21\4&1BD7BFCD&0&22F0 : SDA Standard Compliant SD Host Controller
        ROOT\LEGACY_FVEVOL\0000 : Bitlocker Drive Encryption Filter Driver
        PCI\VEN_1180&DEV_0832&SUBSYS_9025104D&REV_04\4&1BD7BFCD&0&21F0 : Ricoh 1394 OHCI Compliant Host Controller
    Now going to search for drivers for those.
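
    For anyone chasing the same symptoms: a quick way to dump exactly these hardware IDs on Windows 7, with no extra tooling, is the built-in WMI command line. A sketch; wmic ships with Windows, but the filter string here is only an illustration, adjust it to the vendor you are hunting:

        wmic path Win32_PnPEntity get Name,DeviceID | findstr /i "ricoh sd"

    The PCI\VEN_xxxx&DEV_xxxx pairs it prints are what driver searches key off.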


  • Problems with repositories on CentOS 3.9

    - by rodnower
    Hello, I have CentOS 3.9 for i386. When I try to install something with yum, e.g. yum install firefox, yum install firefox*, or yum list firefox, I get:
        +++++++++++++++++++
        yum info firefox
        Gathering header information file(s) from server(s)
        Server: CentOS-3 - Addons
        Server: CentOS-3 - Base
        Server: CentOS-3 - Extras
        Server: CentOS-3 - Updates
        Server: Jason's Utter Ramblings Repo
        Finding updated packages
        Downloading needed headers
        Looking in Available Packages:
        Looking in Installed Packages:
        +++++++++++++++++++
    Some time ago I had CentOS 5 and had a similar problem (except that there it was all packages, not just firefox, that would not install), and I spent a lot of time hunting for different repositories. Now I have CentOS 3, and there is nothing I can install with yum. This is the yum.conf content:
        +++++++++++++++++++
        [main]
        cachedir=/var/cache/yum
        debuglevel=2
        logfile=/var/log/yum.log
        pkgpolicy=newest
        distroverpkg=redhat-release
        installonlypkgs=kernel kernel-smp kernel-hugemem kernel-enterprise kernel-debug kernel-unsupported kernel-smp-unsupported kernel-hugemem-unsupported
        tolerant=1
        exactarch=1

        [utterramblings]
        name=Jason's Utter Ramblings Repo
        baseurl=http://www.jasonlitka.com/media/EL4/i386/

        [base]
        name=CentOS-$releasever - Base
        baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/

        #released updates
        [update]
        name=CentOS-$releasever - Updates
        baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/

        #packages used/produced in the build but not released
        [addons]
        name=CentOS-$releasever - Addons
        baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/

        #additional packages that may be useful
        [extras]
        name=CentOS-$releasever - Extras
        baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/

        #[centosplus]
        #name=CentOS-$releasever - Plus
        #baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/

        #[testing]
        #name=CentOS-$releasever - Testing
        #baseurl=http://mirror.centos.org/centos/$releasever/testing/$basearch/

        #[fasttrack]
        #name=CentOS-$releasever - Fasttrack
        #baseurl=http://mirror.centos.org/centos/$releasever/fasttrack/$basearch/
        +++++++++++++++++++
    The file is long, so I have trimmed it slightly. So my question is: is there one "normal" repository that carries all the basic things like firefox, which I can put in this file so that everything works? Thank you very much.
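
    One thing worth checking before hunting for more repos; this is a guess, but it fits the empty "Available Packages" result: CentOS 3 is past end of life, and end-of-life releases are moved off mirror.centos.org onto the vault. If that is what happened here, pointing the baseurl entries at vault.centos.org should bring package lists back, e.g. for [base]:

        [base]
        name=CentOS-3.9 - Base
        baseurl=http://vault.centos.org/3.9/os/$basearch/

    The vault carries os/, updates/, addons/ and extras/ trees for 3.9; note that an EOL release receives no security fixes, so the real fix is a newer CentOS.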


  • HA Proxy Stick-table and tcp-connection configuration

    - by Vladimir
    I am using HAProxy version 1.4.18 (2011/09/16). I am trying to insert the following into /etc/haproxy/haproxy.cfg (the file the error output refers to):
        # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
        # Monitors the number of requests sent by an IP over a period of 10 seconds
        stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
        tcp-request connection track-sc1 src
        tcp-request connection reject if { src_get_gpc0 gt 0 }

        # Table definition
        stick-table type ip size 100k expire 30s store conn_cur(3s)

        # Allow clean known IPs to bypass the filter
        tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }

        # Shut the new connection as long as the client already has 10 open
        tcp-request connection reject if { src_conn_cur ge 10 }
        tcp-request connection track-sc1 src
    I get the following errors:
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:36] : stick-table: unknown argument 'store'.
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:37] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:38] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:41] : stick-table: unknown argument 'store'.
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:43] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:45] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:46] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
        [WARNING] 256/113143 (4627) : Proxy 'http_proxy': in multi-process mode, stats will be limited to process assigned to the current request.
        [ALERT] 256/113143 (4627) : Fatal errors found in configuration. [fail]
    Could you please tell me what is wrong with the config? Thanks!
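
    "unknown argument" from the parser usually means the keyword postdates the binary, and that looks like the case here: if I am reading the feature history correctly, stick-table data stores (gpc0, http_req_rate) and the tcp-request connection ruleset come from the HAProxy 1.5 development line, which 1.4.18 does not know. A quick way to confirm what a given build supports, and to re-validate the file after edits:

        haproxy -vv                              # version and build options
        haproxy -c -f /etc/haproxy/haproxy.cfg   # parse/validate only, do not start

    If the directives really are too new, the options are to run a 1.5-dev snapshot or express the limit with the primitives 1.4 does have.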


  • OTRS upgrade 3.0 to 3.1 fails

    - by Valentin0S
    Today I started upgrading OTRS from version 2.3 to 2.4, 2.4 to 3.0, and 3.0 to 3.1. Everything went smoothly except the upgrade from 3.0 to 3.1. OTRS provides a few Perl scripts that make the upgrade easier, and I used these scripts for each upgrade step. The 3.0 to 3.1 step fails in scripts/DBUpdate-to-3.1.pl with the following error:
        root@tickets:/opt/otrs# su - otrs
        $ scripts/DBUpdate-to-3.1.pl
        Migration started...
        Step 1 of 24: Refresh configuration cache...
        If you see warnings about 'Subroutine Load redefined', that's fine, no need to worry!
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm line 5.
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAuto.pm line 4.
        done.
        Step 2 of 24: Check framework version... done.
        Step 3 of 24: Creating DynamicField tables (if necessary)... done.
        DBD::mysql::db do failed: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)) at /opt/otrs-3.1.10/Kernel/System/DB.pm line 478.
        ERROR: OTRS-DBUpdate-to-3.1-10 Perl: 5.14.2 OS: linux Time: Wed Sep 5 15:36:20 2012
        Message: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)), SQL: 'INSERT INTO dynamic_field (name, label, field_order, field_type, object_type, config, valid_id, create_time, create_by, change_time, change_by) VALUES (?, ?, ?, 'Text', 'Ticket', '--- {} ', 1, '2012-09-05 15:36:20', 1, '2012-09-05 15:36:20', 1)'
        Traceback (20405):
          Module: main::_DynamicFieldCreation (v1.85) Line: 466
          Module: scripts/DBUpdate-to-3.1.pl (v1.85) Line: 95
        Could not create new DynamicField TicketFreeKey1 at scripts/DBUpdate-to-3.1.pl line 477.
        Step 4 of 24: Create new dynamic fields for free fields (text, key, date)... $
    Did anyone else face the same issue? Thanks in advance.
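
    Reading the failing statement: the INSERT sets create_by = 1 and change_by = 1, and FK_dynamic_field_create_by_id requires that a row with id = 1 exists in users. A quick sanity check against the pp_otrs schema (column and table names taken straight from the error message, nothing else assumed):

        SELECT id FROM users WHERE id = 1;

    If that comes back empty, the admin/root user row the migration script expects is missing, which would fully explain the constraint failure.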


  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task: I'd like to remove the extensions from the URLs on my website. My application is set up to handle and rewrite the URLs; the trouble is I can't get past IIS to actually reach my application without the extensions. The details: I'm running IIS 6 on Windows Server 2003. I've gone into the website for my application, gone to the Home Directory tab, clicked "Configuration", and added a wildcard map to the following file:
        c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll
    which I verified is the same file used in the application extensions portion above it by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows:
        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Thu, 14 Jan 2010 15:04:49 GMT
    which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app. How can I troubleshoot this? I feel like I've double-checked everything a dozen times, but to no avail; I must be missing something obvious. Update: here are the exact instructions given by the ASP.NET URL rewriter that I'm using:
        IIS 6.0 - Windows 2003 Server
        1. Open the property page for the website / virtual directory.
        2. Click the 'home directory' tab.
        3. Click the 'configuration' button, select the 'mappings' tab.
        4. Click 'insert' next to the 'Wildcard application maps' section.
        5. Browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll).
        6. Ensure that 'check that file exists' is unchecked.
        7. Click OK, OK, OK to close and apply changes.
    Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?
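
    One low-level check that can help: confirm the wildcard map actually landed on the metabase node that serves the request (website vs. virtual directory), since IIS 6 stores mappings per path. A sketch using the stock admin script; the path W3SVC/1/ROOT assumes site ID 1, and if I remember the metabase layout correctly, wildcard maps show up in the ScriptMaps property under a '*' key; treat both as assumptions to verify:

        cd %SystemDrive%\Inetpub\AdminScripts
        cscript adsutil.vbs get W3SVC/1/ROOT/ScriptMaps

    If no '*' entry appears at the path that serves /Blogs, the request never reaches aspnet_isapi.dll, which matches the plain IIS 404 being returned.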


  • haproxy and tomcat intermittent hangs

    - by Lorin
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures: at seemingly random intervals, a request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly shows no problems at all. Hitting it repeatedly with curl will trigger the error within 10-15 tries:
        curl -ikL http://admin:admin@<my server>:81/manager/status
    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client and puts this into the log file:
        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"
    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app shows activity on only one thread, the one serving the manager app itself. Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.
    server.xml:
        <Connector port="7000" protocol="HTTP/1.1"
                   enableLookups="false"
                   maxKeepAliveRequests="1"
                   connectionLinger="10" />
    haproxy config:
        global
            log loghost local0
            chroot /var/haproxy

        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
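
    A hedged reading of the log line: in the timing fields 5/0/0/-1/30014, the -1 means haproxy connected but never received a response, and 30014 ms sits right at srvtimeout 30000; the sHNN flags say the session ended on a server-side timeout. Combined with maxKeepAliveRequests="1" on the Tomcat side, one plausible culprit is a keep-alive mismatch: option httpclose in this generation of haproxy only tags Connection: close on the headers and does not actively shut the connection, so one side can keep reusing a connection the other considers finished. If the build supports it, option forceclose actively closes both directions once the response ends:

        listen http_proxy :81
            mode http
            option forceclose   # actively close, instead of only adding Connection: close

    This is a sketch of something to try, not a confirmed diagnosis.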


  • Can't Copy to Clipboard from Vim

    - by maksim
    I'm running Vim 7.3 under Linux Mint 13 (using MATE) and I'm not able to save text to the system clipboard. I run Vim in the terminal and copy text from the terminal with Ctrl+Insert. When I select text in Vim (either with the mouse or in visual mode), Ctrl+Insert doesn't copy any text. In addition, when I right-click, Copy is grayed out. Further, I can't write to the system buffer by yanking to the corresponding register using Vim commands. However, I'm able to paste while in insert mode (using Shift+Insert or right-click paste). I'm also able to copy text directly from the terminal using the same technique, just not text from Vim. Here is my current ~/.vimrc; the relevant part is most likely:
        set clipboard=autoselect,unnamed,exclude:cons\|linux
    If I put finish at the top of my ~/.vimrc, I have the same issue, so I suspect that line is not actually at fault. Could there be another config file affecting Vim's behavior? How can I change my ~/.vimrc to allow me to copy text from Vim?
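
    This symptom pattern (pasting into Vim works via the terminal, but yanking never reaches the clipboard) is what a Vim binary built without clipboard support looks like; in that case no 'clipboard' setting can help. Worth checking before touching the vimrc further; the package name below is the usual one on Mint/Ubuntu, but treat it as an assumption for your exact repo:

        vim --version | grep clipboard

    If the output shows -clipboard and -xterm_clipboard (minus signs), install a build with X selection support; vim-gtk also provides a terminal vim with +clipboard:

        sudo apt-get install vim-gtk

    After that, yanks to the "+ register (or via 'clipboard=unnamed') should reach the system selection.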


  • WordPress: can't access WordPress.com and other external sites?

    - by Rax Olgud
    Hello, I recently started a WordPress blog using hosting at MyDomain (they offer the application "natively"). The blog works fine; however, there are two plugins I can't seem to set up correctly. First, the WordPress.com Stats plugin requires an API key. When I enter it, I get the following message:
        Error from last API Key attempt: Your blog was unable to connect to WordPress.com. Please ask your host for help. (transport error - could not open socket: 110 Connection timed out)
    Second, the Akismet plugin is not configured. When I go to the Akismet page to enter my API key, it shows the following message:
        There was a problem connecting to the Akismet server. Please check your server configuration.
    I assume the two issues are related. I approached my hosting provider about it, and all they said is that they don't support WordPress, only provide the means to install it. To clarify, up to this point I have only been able to set up plugins that don't require an API key. What can I do to diagnose the problem and fix it? As a workaround, are there comparable stats and anti-spam plugins that don't require an API key? Many thanks.
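
    Both messages describe the same failure: the web server cannot open an outbound connection to WordPress.com/Akismet, which on shared hosting usually means outbound HTTP is firewalled. If any shell access to the host is available, a one-line probe against Akismet's key-verification endpoint separates "key rejected" from "cannot connect at all"; YOUR_API_KEY and the blog URL are placeholders to substitute:

        curl -v --data "key=YOUR_API_KEY&blog=http://yourblog.example" http://rest.akismet.com/1.1/verify-key

    A response body of valid or invalid means connectivity is fine and the key is the issue; a connection timeout like the one in the plugin message points back at the host's firewall.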


  • USB To Serial under OpenSuse 11.3

    - by Lars
    I have a LogiLink USB-to-Serial adapter with the PL2303 chip inside. When I insert the device:
        [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9
        [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303
        [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [26065.076105] usb 7-1: Product: USB-Serial Controller
        [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc.
        [26065.079181] pl2303 7-1:1.0: pl2303 converter detected
        [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0
    So the device is recognized and the converter is attached to ttyUSB0. But when I do
        screen /dev/ttyUSB0 9600
    I get the error:
        bash: /dev/ttyUSB0: Permission denied
    So I went looking at the file permissions. ls -l in the /dev folder reports:
        crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0
    I added my user lars to the dialout group, and the groups command shows that lars is in the group. Yet I still receive the permission denied error, both as lars and as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is openSUSE 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
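
    One detail that trips a lot of people up here: supplementary groups are evaluated at login, so a session that was already open when dialout was added does not have the group yet. A quick way to check and work around it in the current shell (standard tools, nothing distro-specific assumed):

        id               # groups of the *current* session
        id lars          # groups as configured in /etc/group
        newgrp dialout   # subshell that picks up the new group without relogging
        screen /dev/ttyUSB0 9600

    If id lars lists dialout but plain id does not, logging out and back in (or the newgrp above) should clear the error for lars. The failure as root is stranger, since root bypasses the group check; if it persists there, something else is going on (for example the device node being torn down and recreated).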


  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64-bit on the following hardware:
        motherboard: Intel DBS1200BTL
        Promise FastTrak TX4660 RAID controller
        4 disks set up as two RAID1 arrays (handled by the FastTrak)
    I am trying to install Windows so it boots from a RAID1 volume created with the FastTrak controller. The installation goes as in the manual: I insert the disk with the driver, select 'Browse', and specify the correct driver. The installer finds all the RAID arrays but reports error 0x80300001, saying Windows can't be installed on the listed RAID volumes since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation anyway, Windows copies and unpacks itself and performs the other standard steps. After the computer restarts, it won't boot: Windows Boot Manager appears in the boot devices list, but neither it nor the RAID volume itself boots. Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. Driver version is 1.3.0.4. Thanks.


  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its:
        - blazing insert performance
        - document-oriented storage
        - ability to let the engine drop inserts when needed for performance
    But there is one big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB of RAM). So, for us to hold 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM. Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!
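
    For anyone sizing the same trade-off: the working numbers come straight out of the mongo shell, so the index-versus-RAM headroom can be measured before buying anything. A quick sketch; db.stats() is a standard shell helper that reports sizes in bytes, the rest is just unit conversion:

        > var s = db.stats()
        > s.indexSize / (1024 * 1024 * 1024)   // GB of index that wants to stay resident
        > s.dataSize  / (1024 * 1024 * 1024)   // GB of raw data

    Tracking indexSize growth against data growth over a week of log volume gives a defensible forecast of when a 16 GB box runs out of headroom.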


  • Oracle 11gR2: NLS_CHARACTERSET accidentally removed with an UPDATE-Query

    - by Marco Nätlitz
    Hi folks, I have a fresh installation of Oracle 11gR2 x64 on CentOS. After the installation I wanted to get productive and started to import my dumps. One of the dumps caused a character-set error, so I tried to change the system's character set to the one specified in the dump. I ran a statement like this:
        UPDATE nls_database_parameters SET parameter='WS....' WHERE parameter='NLS_CHARACTERSET';
    As you can see, I wrote the character-set value into the parameter column instead of the value column. I guess I was thinking too much about the problem instead of checking what I was typing. After that query, the NLS_CHARACTERSET parameter is gone and the server reports that the character set is "(null)". I want to put the NLS_CHARACTERSET parameter back but don't know how. If I try something like this:
        INSERT INTO nls_database_parameters (PARAMETERS, VALUE) VALUES ("NLS_CHARACTERSET", "AL32UTF8");
    I get the error (translated from German; the Oracle error code is unchanged):
        Error at command line:1 column:84 Error report:
        SQL error: ORA-00984: column not allowed here
        00984. 00000 - "column not allowed here"
    Do you have any idea how I can fix that? Thanks and best regards, Marco
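
    The immediate ORA-00984 is a quoting problem: in Oracle SQL, double quotes mark identifiers, not strings, so "NLS_CHARACTERSET" in the VALUES list parses as a column name, which is exactly what "column not allowed here" complains about (and PARAMETERS would need to be PARAMETER). String literals take single quotes:

        INSERT INTO nls_database_parameters (parameter, value)
        VALUES ('NLS_CHARACTERSET', 'AL32UTF8');

    That fixes the syntax, but a caveat before running anything like it: nls_database_parameters is a data-dictionary view over SYS-owned base tables, and hand-editing the dictionary to repair a character set is unsupported territory. 'AL32UTF8' above is only a placeholder; whatever is restored must match the character set the database was actually created with, and with a damaged dictionary, Oracle Support or a restore from backup is the safer route.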

