Search Results

Search found 19871 results on 795 pages for 'commit log'.


  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these features are not free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

    private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } }

    Before an entity can be updated, it has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

    private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } }

    The above imperative code specifies the "how to do it" details and gets better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the declarative code above should be replaced by:

    private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", unitPrice, id); } }

    Or just create a stored procedure:

    CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END

    and map it as a method of NorthwindDataContext (explained in this post):

    private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After the O/R-mapping-specific issue, now look into the LINQ to SQL-specific issues, for example, performance of the data retrieving process. The previous post explained that SQL translation and execution is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
    It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:

    Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
    Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
    Flatten: figure out the hierarchy of the query;
    Rewrite: for SQL Server 2000, if needed;
    Reduce: remove the unnecessary information from the tree;
    Parameterize: figure out the parameters; for example, a reference to a local variable should become a parameter in the SQL;
    Format: generate the SQL statement string;
    Materialize: execute the reader and convert the results back into typed objects.

    So for each data retrieval, even one which looks simple:

    private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

    internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } }

    The new version of RetrieveProducts() gets better performance, because only the first time _retrieveProducts is invoked does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the query is translated only once in multi-threaded scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translation overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures), LINQ to SQL (Part 7: Updating our Database using Stored Procedures), LINQ to SQL (Part 8: Executing Custom SQL Expressions).

    Data changing overhead

    Looking into the data updating process, it also involves a lot of work:

    Begins a transaction;
    Processes the changes (ChangeProcessor): walks through the objects to identify the changes, determines the order of the changes, and executes the changes; LINQ queries may be needed to execute the changes (like the first example in this article, where an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through);
    If there is user customization, it is executed; for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer.

    It is important to keep this overhead in mind.
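    One more option for the retrieve-before-update overhead from the first section: stay in LINQ to SQL, but attach a stub entity as modified, so that SubmitChanges() generates only the UPDATE. This is a sketch of mine, not from the article, and it carries an assumption: the Product mapping must tolerate attach-as-modified (for example, the table has a version/timestamp member, or its columns use UpdateCheck.Never); otherwise Attach() throws.

    private static void UpdateProductUnitPriceWithAttach(int id, decimal unitPrice)
    {
        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            // Stub entity carrying the primary key and the new value; no SELECT is issued.
            Product product = new Product { ProductID = id, UnitPrice = unitPrice };

            // Attach as modified, so SubmitChanges() emits an UPDATE directly.
            // Assumption: the mapping allows attach-as-modified (a version column,
            // or UpdateCheck.Never on the remaining members).
            database.Products.Attach(product, true);
            database.SubmitChanges(); // UPDATE... (no preceding SELECT)
        }
    }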
    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

    private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } }

    The expected SQL should be something like:

    BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION

    However, as mentioned before, the actual SQL retrieves the entities, and then deletes them one by one:

    -- Retrieves the entities to be deleted:
    exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
    -- Deletes the retrieved entities one by one:
    BEGIN TRANSACTION
    exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
    exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
    -- ...
    COMMIT TRANSACTION

    The same applies to bulk updating. This is really inefficient, and something to be aware of. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

    exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for variable-length arguments, like arguments of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

    using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length.
    database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); }

    Pay attention to the argument lengths in the translated SQL:

    exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
    exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's cached query plan is not reused by the second one:

    SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument lengths in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

    internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } }

    In the above example, the translated SQL becomes:

    exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
    exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    so they reuse the same query plan cache: now the [usecounts] column is 2.
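    On runtimes before this fix (or in hand-written ADO.NET), the same plan-cache fragmentation can be avoided by declaring the parameter size explicitly. Here is a minimal sketch of mine, not from the original article, assuming the same Northwind database as the examples above:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    private static int[] RetrieveProductIds(string productName)
    {
        using (SqlConnection connection = new SqlConnection(
            "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
        using (SqlCommand command = new SqlCommand(
            "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
            connection))
        {
            // Fix the declared size (NVARCHAR's 4000 maximum) so that "A" and "AA"
            // produce the same parameterized SQL text and reuse one cached query plan.
            command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = productName;
            connection.Open();
            List<int> ids = new List<int>();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    ids.Add(reader.GetInt32(0));
                }
            }
            return ids.ToArray();
        }
    }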

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. For example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the files in the commit in the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3; however, I'm not sure whether it lets you carry out these kinds of file operations in place. I was thinking I might have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html), but that seems to only integrate with Rails apps. Can anyone help with this? Basically, I want file operations to be as cheap as possible.

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien Use the Schluesselwort/Keyword: upgrade112 And thanks to all of you who sent feedback regarding "traget/destination" (I will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html Thanks for this hint!!! Regarding the open questions: Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location. Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could find neither a bug entry nor documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences. Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 - the note was updated on 24-FEB-2010. See the telephone number for requesting an escalation from Oracle Support here: http://www.oracle.com/support/contact.html

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has put some ugly code to refactor on his blog. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind it. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know every brittle detail, or whether I am allowed to reorder things here and there to simplify them. So I decided to make it much simpler by identifying the different responsibilities of the method and encapsulating them in different classes. The code we need to refactor seems to deal with a handler that runs after a message has been sent to a message queue. The handler completes the current transaction, if there is any, and handles any errors happening there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can also enter the handler in an already faulty state, in which case we try to deliver the completed event anyway, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All of this is decorated with many try/catch blocks, duplicated code, and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess, and could have been written by me if I was under pressure. Here comes the code we want to refactor:

        private void HandleMessageCompletion(
            Message message,
            TransactionScope tx,
            OpenedQueue messageQueue,
            Exception exception,
            Action<CurrentMessageInformation, Exception> messageCompleted,
            Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            var txDisposed = false;
            if (exception == null)
            {
                try
                {
                    if (tx != null)
                    {
                        if (beforeTransactionCommit != null)
                            beforeTransactionCommit(currentMessageInformation);
                        tx.Complete();
                        tx.Dispose();
                        txDisposed = true;
                    }
                    try
                    {
                        if (messageCompleted != null)
                            messageCompleted(currentMessageInformation, exception);
                    }
                    catch (Exception e)
                    {
                        Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                    }
                    return;
                }
                catch (Exception e)
                {
                    Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                    exception = e;
                }
            }
            try
            {
                if (txDisposed == false && tx != null)
                {
                    Trace.TraceWarning("Disposing transaction in error mode");
                    tx.Dispose();
                }
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
            }
            if (message == null)
                return;
            try
            {
                if (messageCompleted != null)
                    messageCompleted(currentMessageInformation, exception);
            }
            catch (Exception e)
            {
                Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
            }
            try
            {
                var copy = MessageProcessingFailure;
                if (copy != null)
                    copy(currentMessageInformation, exception);
            }
            catch (Exception moduleException)
            {
                Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
            }
            if (messageQueue.IsTransactional == false) // put the item back in the queue
            {
                messageQueue.Send(message);
            }
        }

    You can see quite some processing and handling going on there. Yes, this looks like real-world code someone put together to make things work, and he does not trust his callbacks. I guess these are event handlers which are optional, and the delegates were extracted from an event to call them back later when necessary. Let's see what the author of this code intended:

        private void HandleMessageCompletion(
            TransactionHandler transactionHandler,
            MessageCompletionHandler handler,
            CurrentMessageInformation messageInfo,
            ErrorCollector errors)
        {
            // commit current pending transaction
            transactionHandler.CallHandlerAndCommit(messageInfo, errors);

            // We have an error for a null message, do not send completion event
            if (messageInfo.CurrentMessage == null)
                return;

            // Send completion event in any case, regardless of errors
            handler.OnMessageCompleted(messageInfo, errors);

            // put message back if queue is not transactional
            transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
        }

    I did not bother to write the intention here again, since the code should be pretty self-explanatory by now. I have used comments to explain the still nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying SRP (the Single Responsibility Principle) and some functional style, we can hide the necessary complexity in useful abstractions which make it much easier to reason about. Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of the current message in an ErrorCollector object, which stores all exceptions in a list, along with a description of what each error was about in the exception itself. We can log it later or not, depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.

        class ErrorCollector
        {
            List<Exception> _Errors = new List<Exception>();

            public void Add(Exception ex, string description)
            {
                ex.Data["Description"] = description;
                _Errors.Add(ex);
            }

            public Exception Last
            {
                get { return _Errors.LastOrDefault(); }
            }

            public bool HasError
            {
                get { return _Errors.Count > 0; }
            }
        }

    Since the error state is global, we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler), or pass it to the method calls when necessary. I chose the latter, because a second argument does not hurt, and it makes it easier to reason about the overall state while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread-safe as well. This does not mean that the stored member variables are stateless or thread-safe themselves, but at least our helper classes are. Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it calls the before-commit handler, commits the transaction, and disposes the transaction if the commit throws. In fact it has a second responsibility, to resend the message if the transaction fails; I see a good fit there, since it deals with transaction failures.

        class TransactionHandler
        {
            TransactionScope _Tx;
            Action<CurrentMessageInformation> _BeforeCommit;
            OpenedQueue _MessageQueue;

            public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)
            {
                _Tx = tx;
                _BeforeCommit = beforeCommit;
                _MessageQueue = messageQueue;
            }

            public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                if (_Tx != null && !errors.HasError)
                {
                    try
                    {
                        if (_BeforeCommit != null)
                        {
                            _BeforeCommit(currentMessageInfo);
                        }
                        _Tx.Complete();
                        _Tx.Dispose();
                    }
                    catch (Exception ex)
                    {
                        errors.Add(ex, "Failed to complete transaction, moving to error mode");
                        Trace.TraceWarning("Disposing transaction in error mode");
                        try
                        {
                            _Tx.Dispose();
                        }
                        catch (Exception ex2)
                        {
                            errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                        }
                    }
                }
            }

            public void ResendMessageOnError(Message message, ErrorCollector errors)
            {
                if (errors.HasError && !_MessageQueue.IsTransactional)
                {
                    _MessageQueue.Send(message);
                }
            }
        }

    If we need to change the handling in the future, we will have a much easier time reasoning about our application flow than before. After we have completed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback, and the failure callback when some error has occurred.

        class MessageCompletionHandler
        {
            Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
            Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

            public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                            Action<CurrentMessageInformation, Exception> messageProcessingFailure)
            {
                _MessageCompletedHandler = messageCompletedHandler;
                _MessageProcessingFailure = messageProcessingFailure;
            }

            public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageCompletedHandler != null)
                    {
                        _MessageCompletedHandler(currentMessageInfo, errors.Last);
                    }
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
                }
                if (errors.HasError)
                {
                    SignalFailedMessage(currentMessageInfo, errors);
                }
            }

            void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageProcessingFailure != null)
                        _MessageProcessingFailure(currentMessageInfo, errors.Last);
                }
                catch (Exception moduleException)
                {
                    errors.Add(moduleException, "Module failed to process message failure");
                }
            }
        }

    If for some reason I screwed up the logic and we need to call the completion handler from our transaction handler, we can simply add a third argument, the MessageCompletionHandler, to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, to capture that state. During this refactoring I simply put things together that belong together and came up with useful abstractions.
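    To see how the pieces fit together, here is a hypothetical call site wiring the helpers up (the field names currentMessageInformation and MessageProcessingFailure come from the original code; the method name and the error seeding are my assumptions for illustration):

        private void OnMessageProcessed(
            Message message,
            TransactionScope tx,
            OpenedQueue messageQueue,
            Exception exception,
            Action<CurrentMessageInformation, Exception> messageCompleted,
            Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            // Collect the error state up front; the helpers themselves stay stateless.
            var errors = new ErrorCollector();
            if (exception != null)
            {
                errors.Add(exception, "Message processing failed before completion");
            }

            var transactionHandler = new TransactionHandler(tx, beforeTransactionCommit, messageQueue);
            var completionHandler = new MessageCompletionHandler(messageCompleted, MessageProcessingFailure);

            HandleMessageCompletion(transactionHandler, completionHandler, currentMessageInformation, errors);
        }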
    If you look at the original argument list of the HandleMessageCompletion method, you can see how many things I have put together:

    Message message -> CurrentMessageInformation messageInfo (which carries the Message)
    TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit, OpenedQueue messageQueue -> TransactionHandler transactionHandler
    Exception exception -> ErrorCollector errors
    Action<CurrentMessageInformation, Exception> messageCompleted, Action<CurrentMessageInformation, Exception> messageProcessingFailure -> MessageCompletionHandler handler

    The reason is simple: put the things that have relationships together, and you will find useful abstractions nearly automatically. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.

    Read the article

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be:

    Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up — some of our apps are 500MB or more.)
    Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team.
    When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight.

    I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions: I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"? How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project… Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?

    Read the article

  • Host your own private git repository via SSH

    - by kerry
    If you are like me, you have tons of projects you would like to keep private but track with git, and do not want to pay a git host for a private plan. One of the problems is that most hosts scale their plans by projects instead of users. Luckily, it is easy to host your own git repositories on any el cheapo host that provides ssh access. In the interest of full disclosure, I learned this trick from this blog post. I decided to recreate it in case the source material vanishes for some reason. To set up your host, login via ssh and run the following commands:

    mkdir ~/git/yourprojectname.git
    cd ~/git/yourprojectname.git
    git --bare init

    Then in your project directory (on your local machine):

    # setup your user info
    git config --global user.name "Firstname Lastname"
    git config --global user.email "[email protected]"
    # initialize the workspace
    git init
    git add .
    git commit -m "initial commit"
    git remote add origin ssh://[email protected]/~/git/yourprojectname.git
    git push origin master

    It’s that easy! To keep from entering your password every time, add your public key to the server: generate your key with ‘ssh-keygen -t rsa‘ on your local machine, then add the contents of the generated file to ~/.ssh/authorized_keys on your server.

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 Error Handling as well. My book’s co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is the brief abstract of the webinar: People are often shocked when they see the demo in this talk, where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with & without structured error handling. Register for the webinar now to learn: How to predict the Error Action and control it; Nuances between successful and failing SQL statements; Essential SQL Server 2012 configuration options. Register for the webinar and be present during it. My co-author will announce a winner (maybe more than one winner) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled for 2 different times to accommodate various time zones: 1) 10am ET/7am PT, 2) 1pm ET/11am PT. Each webinar will have its own winner. You can increase your chances by attending both webinars. Do not miss this opportunity and register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that may not have been ready will be applied to the production servers. So we're considering having each developer work on a feature/topic branch, where each of them works on their own features and, when a feature is ready, merges it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, as they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: We're not advanced git users. We use the GitHub app for Mac OS X and Windows to commit our work.

    Read the article

  • Work Item Keyboard Shortcuts, Resolving Mercurial Work Items, WikiPlex 2.0

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed the latest version of the CodePlex software yesterday. Keyboard Shortcuts With this release, we have added a set of keyboard shortcuts for common tasks in the Issue Tracker. This feature was a popular request in the CodePlex Issue Tracker. The CodePlex team visits the issue tracker frequently when researching and considering new features. If you haven’t visited it recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex. To view the available shortcuts, type ? from any page within the issue tracker to see this help dialog: You can see what each shortcut invokes below: Please give us feedback on this feature and let us know what additional shortcuts would be useful. Resolve Work Items When Pushing Mercurial Changes Another feature we added is the ability to resolve work items when pushing changes to your Mercurial repository, which has been available to our TFS / SVN users for quite some time. The required format is identical to the SVN format listed here. When committing your changes locally, add "Work Items: Id, AnotherId" to your commit message (for example, "Fix login crash. Work Items: 123, 456"). When you push, CodePlex will detect this comment, add a commit message, and resolve the work item. WikiPlex Goes 2.0! CodePlex continues to improve WikiPlex, our open source wiki engine. WikiPlex hit another major milestone today with the release of version 2.0! We have added several new features, including: interleaving ordered and unordered lists, specifying the height and width for images, a multi-line indentation macro, and a restructuring of some of the API. Visit Matt's announcement for more information on the release, or grab the binaries via NuGet or CodePlex.

    Read the article

  • How do I stop my ethernet network connection from dropping?

    - by Sean Hill
    My ethernet-based network connection doesn't stay up consistently. I'm running a ping against the gateway and it will: work for a minute; freeze, time out, or give multi-second response times; repeat. If it's stuck and I disable/enable networking through the network manager applet, everything will work fine again for a minute. After 280 packets transmitted, I'm getting 41% packet loss. I've tried a different cable and connection to the gateway, but this had no effect. The distance to the gateway is just about 3 feet. It seems to work fine if I switch over to Windows, but Ubuntu is my main OS and I can't even use it right now, as I depend on the network. My setup... OS: Ubuntu 11.04, dual-booting Windows 7 Mobo: Gigabyte Z68X-UD4-B3 CPU: Intel Core i7 2600K Edit A little clarification... Network Manager is still showing me as connected, but I am unable to reach the gateway or anything beyond. At no point does NM suggest the connection is lost, and calling ifconfig shows that I still have an IP address. I tried connecting to a different gateway with a different cable and the same problem arises. As requested: lspci | grep -i eth 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) dmesg | tail -f [ 14.024709] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0 [ 14.026443] EXT4-fs (sda7): re-mounted. Opts: commit=0 [ 14.176101] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. [ 23.917731] eth0: no IPv6 routers present [ 726.109697] r8169 0000:07:00.0: eth0: link up [ 733.169494] r8169 0000:07:00.0: eth0: link up [ 753.930119] r8169 0000:07:00.0: eth0: link up [ 880.787332] r8169 0000:07:00.0: eth0: link up [ 1159.161283] r8169 0000:07:00.0: eth0: link up [ 1406.623550] r8169 0000:07:00.0: eth0: link up Edit @roland-taylor: Network is always available under Windows. Pings do not time out, applications do not complain of no network availability, large downloads are not interrupted or slowed.

    Read the article

  • 10.10 boots to command line login prompt

    - by greggory.hz
    I recently installed Ubuntu 10.10 on a computer that was previously running 10.04 (which worked fine). Now, each time I boot up, it starts at a command line login prompt. I can log in, and it stays at the command line (as expected). I can then manually start gdm with sudo start gdm and it works fine. I can also enable compiz (using the proprietary nvidia drivers), so I'm reasonably confident that it's not a driver problem (at least not in the sense that the drivers just flat out aren't working). Interestingly, if I leave it at the command prompt without logging in, after about 5 or 10 minutes gnome starts up on its own. I'm not sure what is causing this. This is what dmesg | tail gives me after a manual start of gdm: [ 15.664166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 270.18 Tue Jan 18 21:46:26 PST 2011 [ 15.991304] type=1400 audit(1297543976.953:11): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=990 comm="apparmor_parser" [ 16.606986] eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1 [ 18.798506] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0 [ 26.740010] eth0: no IPv6 routers present [ 90.444593] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0 [ 189.252208] audit_printk_skb: 21 callbacks suppressed [ 189.252213] type=1400 audit(1297544150.218:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1876 comm="apparmor_parser" [ 189.252584] type=1400 audit(1297544150.218:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1876 comm="apparmor_parser" [ 351.159585] lo: Disabled Privacy Extensions

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more: LAMP stack. Java IDEs like Eclipse and IntelliJ. JS frameworks like Dojo, Node.JS, AngularJS (is it better to mix or commit to one?). Cloud solutions like EC2 and Azure (again, OK to mix, or better to commit to one?). Google APIs. Continuous integration server. Source control tools, with Git for new work and SVN, CVS, and others for imports. FTP server. Unit test runners. Bug trackers. OOAD modeling tools or plug-ins? Graphic design tools? Hosting services. XML / JSON / other markup? Content management, SEO? I am also interested to know if there are tools where it might be better to mix, match, or support all available (maybe for source control), and others where the full focus should be on one (maybe Java vs. C#, or Windows vs. Linux vs. MacOS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice; each project continues its legacy, sometimes with poor tools).

    Read the article

  • CodePlex learns to talk to other services!

    CodePlex is now able to talk to other services! For example, if you want CodePlex to tell Trello to update cards on your Trello board, it can do it. Or if you want CodePlex to notify your Campfire chat room when updates are pushed, it can do that too. To start off, we are going to be adding support for the following services: Campfire – Notify a Campfire chat room when commits occur HipChat – Notify a HipChat chat room when commits occur Trello – Add commit summaries to Trello cards by referencing those cards in commit messages Twitter – Notify your Twitter followers when updates are pushed to your project In addition, we will continue to support our existing integrations with Windows Azure – Continuously deploy to Windows Azure on pushes (For Git and Hg projects) AppHarbor – Continuously deploy to AppHarbor on pushes To set up these integrations for your project, navigate to the project settings page as a project coordinator, and click on the services section as seen below:   While we are starting with these six services, the infrastructure is now in place to allow us to quickly roll out new integrations. We would love to hear which services and integrations you would like to see most on our suggestions page. We realize that there are some services and URLs that only make sense for your project to send notifications to. To support this scenario, we plan to add generic web hooks in the near future. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.    

    Read the article

  • How do you communicate improvements in tools and process to the development team?

    - by birryree
    Hi everyone, My team does a lot of internal tooling and infrastructure work - you can think of us as a small-scale version of the teams at Facebook, Etsy, Netflix, etc. who build all the infrastructure for scaling their services up to thousands or tens of thousands of servers and supporting millions of users. Lately, we've been running full steam ahead improving many of the tools we use internally, like tools for automatically creating new servers, setting up new application instances, etc. An end result of this has been decreased developer frustration, but increased 'ignorance' among most of the developer team about how to use our tools correctly and effectively. More often than not, my team will be asked by other teams to help them use the tools. Solutions we've thought up or things already in place: All our code is relatively simple and self-explanatory, with good comments where necessary, so developers could read the scripts. Counterargument: You can guess this isn't a particularly good idea, having people read our tools' code to figure out how to use them. All our code is committed to Subversion with very detailed commit messages about changes, so developers could read the commit emails. Counterargument: Expect the developers to read all our commits? Ludicrous. Wiki - we have an internal company wiki that we try to maintain with up-to-date information, but as we are moving so fast, the wiki has to keep pace as well. Counterargument: As mentioned, we move fast in my team, as more improvements to our tools are added daily. Again, this relies on people reading something that might change constantly. Email the team? We could email the team when we have a glut of improvements to communicate. So as you can all see, we are trying to find new ideas and explore options we haven't thought of yet. Has anyone else ever been in a similar situation and have some guidance?

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says: an IndexWriter is thread-safe; the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader); all changes are buffered and flushed periodically (so they seem to encourage using a single instance); but: the changes, although flushed, will only be visible after commit or close; after finishing making updates (add/delete), the instance should be closed. I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
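    For what it's worth, the common pattern is exactly that: one long-lived IndexWriter shared across threads, with periodic commits and a close only at shutdown. Below is a minimal sketch of that pattern, shown with the .NET port (Lucene.Net 3.x) to keep one language across the code on this page; the Java API is analogous, and the class and field names here are illustrative assumptions, not from the question:

    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    public static class IndexWriterHolder
    {
        // One shared writer: IndexWriter is thread-safe, so a single
        // instance can serve all add/update/delete operations.
        private static readonly IndexWriter Writer = new IndexWriter(
            FSDirectory.Open(new DirectoryInfo("index")),
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30),
            IndexWriter.MaxFieldLength.UNLIMITED);

        public static void Add(string id, string text)
        {
            var doc = new Document();
            doc.Add(new Field("id", id, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("text", text, Field.Store.YES, Field.Index.ANALYZED));
            Writer.AddDocument(doc);
            Writer.Commit(); // make changes visible to newly opened readers; the writer stays open
        }

        public static void Shutdown()
        {
            Writer.Dispose(); // close exactly once, at application shutdown
        }
    }

    In practice you would commit in batches rather than per document, since each commit forces a flush to the Directory.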

    Read the article

  • Commenting on Commits

    The source code experience on CodePlex just got more social. Now, you can comment directly on a commit when viewing a project’s source code history. This feature acts as a nice complement to the existing ability to comment on a pull request's commits. To get started with this feature, navigate to the commit you are interested in from the source tab. As you hover over the line you would like to comment on, an icon to add a comment will appear to the left. Click on the icon, and comment away! Once you comment on a line, the person who committed the change and anyone involved in the conversation will be notified of the comment by email. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix, and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs, and easily identify the changes that are relevant to their environments. To understand why we've created this project, we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel, this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their modules still perform as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • JSON error Caused by: java.lang.NullPointerException

    - by user3821853
    I'm trying to make a register page on Android using JSON. Every time I press the register button on the AVD, I get the error "Unfortunately, database has stopped". There is an error in my logcat that I cannot understand. Please, someone help me. This is my Register.java:

    import android.app.Activity; import android.app.ProgressDialog; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import java.util.ArrayList; import java.util.List; public class Register extends Activity implements OnClickListener{ private EditText user, pass; private Button mRegister; // Progress Dialog private ProgressDialog pDialog; // JSON parser class JSONParser jsonParser = new JSONParser(); //php register script //localhost : //testing on your device //put your local ip instead, on windows, run CMD > ipconfig //or in mac's terminal type ifconfig and look for the ip under en0 or en1 // private static final String REGISTER_URL = "http://xxx.xxx.x.x:1234/webservice/register.php"; //testing on Emulator: private static final String REGISTER_URL = "http://10.0.2.2:1234/webservice/register.php"; //testing from a real server: //private static final String REGISTER_URL = "http://www.mybringback.com/webservice/register.php"; //ids private static final String TAG_SUCCESS = "success"; private static final String TAG_MESSAGE = "message"; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.register); user = (EditText)findViewById(R.id.username); pass = (EditText)findViewById(R.id.password); mRegister = (Button)findViewById(R.id.register); mRegister.setOnClickListener(this); } @Override public void onClick(View v) { // TODO Auto-generated method stub new CreateUser().execute(); } class CreateUser extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(Register.this); pDialog.setMessage("Creating User..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } @Override protected String doInBackground(String... args) { // TODO Auto-generated method stub // Check for success tag int success; String username = user.getText().toString(); String password = pass.getText().toString(); try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("username", username)); params.add(new BasicNameValuePair("password", password)); Log.d("request!", "starting"); //Posting user data to script JSONObject json = jsonParser.makeHttpRequest( REGISTER_URL, "POST", params); // full json response Log.d("Registering attempt", json.toString()); // json success element success = json.getInt(TAG_SUCCESS); if (success == 1) { Log.d("User Created!", json.toString()); finish(); return json.getString(TAG_MESSAGE); }else{ Log.d("Registering Failure!", json.getString(TAG_MESSAGE)); return json.getString(TAG_MESSAGE); } } catch (JSONException e) { e.printStackTrace(); } return null; } protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted pDialog.dismiss(); if (file_url != null){ Toast.makeText(Register.this, file_url, Toast.LENGTH_LONG).show(); } } } }

    This is my JSONParser.java:

    import android.util.Log; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.utils.URLEncodedUtils; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import java.util.List; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(final String url) { // Making HTTP request try { // Construct the client and the HTTP request. DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); // Execute the POST request and store the response locally. HttpResponse httpResponse = httpClient.execute(httpPost); // Extract data from the response. HttpEntity httpEntity = httpResponse.getEntity(); // Open an inputStream with the data content. is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { // Create a BufferedReader to parse through the inputStream. BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); // Declare a string builder to help with the parsing. StringBuilder sb = new StringBuilder(); // Declare a string to store the JSON object data in string form. String line = null; // Build the string until null. while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } // Close the input stream. is.close(); // Convert the string builder data to an actual string. json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // Try to parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // Return the JSON Object. return jObj; } // function get json from url // by making HTTP POST or GET method public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } }

    And this is my error:

    08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/Buffer Error? Error converting result java.lang.NullPointerException: lock == null 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/JSON Parser? Error parsing data org.json.JSONException: End of input at character 0 of 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database W/dalvikvm? threadid=15: thread exiting with uncaught exception (group=0xb0f37648) 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database E/AndroidRuntime? FATAL EXCEPTION: AsyncTask #4 java.lang.RuntimeException: An error occured while executing doInBackground() at android.os.AsyncTask$3.done(AsyncTask.java:299) at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) at java.util.concurrent.FutureTask.setException(FutureTask.java:219) at java.util.concurrent.FutureTask.run(FutureTask.java:239) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:841) Caused by: java.lang.NullPointerException at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:108) at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:74) at android.os.AsyncTask$2.call(AsyncTask.java:287) at java.util.concurrent.FutureTask.run(FutureTask.java:234) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:841) 08-18 23:40:02.501 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.591 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.981 2000-2000/com.example.blackcustomzier.database E/WindowManager? Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here android.view.WindowLeaked: Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E.....
R......D 0,0-1026,288} that was originally added here at android.view.ViewRootImpl.<init>(ViewRootImpl.java:345) at android.view.WindowManagerGlobal.addView(WindowManagerGlobal.java:239) at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:69) at android.app.Dialog.show(Dialog.java:281) at com.example.blackcustomzier.database.Register$CreateUser.onPreExecute(Register.java:85) at android.os.AsyncTask.executeOnExecutor(AsyncTask.java:586) at android.os.AsyncTask.execute(AsyncTask.java:534) at com.example.blackcustomzier.database.Register.onClick(Register.java:70) at android.view.View.performClick(View.java:4240) at android.view.View.onKeyUp(View.java:7928) at android.widget.TextView.onKeyUp(TextView.java:5606) at android.view.KeyEvent.dispatch(KeyEvent.java:2647) at android.view.View.dispatchKeyEvent(View.java:7343) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1933) at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1408) at android.app.Activity.dispatchKeyEvent(Activity.java:2384) at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1860) at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3791) at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3774) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3483) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3540) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3516) at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3666) at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:1982) at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1698) at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1689) at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1959) at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141) at android.os.MessageQueue.nativePollOnce(Native Method) at android.os.MessageQueue.next(MessageQueue.java:132) at android.os.Looper.loop(Looper.java:124) at android.app.ActivityThread.main(ActivityThread.java:5103) at java.lang.reflect.Method.invokeNative(Native Method) at 
java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCal…

Please help me solve this. Thanks!
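Reading the logcat from the bottom up, the failure chain is: the HTTP request to http://10.0.2.2:1234 never produces a response (nothing listening on that port, or a wrong URL), so `is` stays null in makeHttpRequest(); constructing a BufferedReader over that null stream throws the "lock == null" NullPointerException (caught and logged as "Buffer Error"); `json` stays empty, hence the "End of input at character 0" JSONException; and makeHttpRequest() ends up returning a null JSONObject, which doInBackground() then dereferences at json.toString() (Register.java:108). A minimal defensive sketch follows; the SafeJson helper and its registerSafely() method are hypothetical additions, not part of the original tutorial code, and they assume the JSONParser class quoted above:

import android.util.Log;
import org.apache.http.NameValuePair;
import org.json.JSONObject;
import java.util.List;

// Hypothetical helper: wraps the question's JSONParser so that a failed
// HTTP call surfaces as a readable message instead of a NullPointerException.
public class SafeJson {
    public static String registerSafely(JSONParser parser, String url,
                                        List<NameValuePair> params) {
        JSONObject json = parser.makeHttpRequest(url, "POST", params);
        if (json == null) {
            // makeHttpRequest() hands back null when no response stream was
            // ever opened (server down, wrong URL/port, emulator can't reach host).
            Log.e("Register", "No JSON response from " + url);
            return "Could not reach the registration server.";
        }
        // optString() never throws; it returns the fallback if the key is missing.
        return json.optString("message", "Server returned no message.");
    }
}

Two smaller fixes are also worth making inside the original code: compare the request method with "POST".equals(method) rather than ==, which only works here by accident of string interning, and add an early `if (is == null) return null;` in makeHttpRequest() before the BufferedReader is built. Finally, the leaked-window warning at the end of the log is the ProgressDialog outliving the Activity: moving finish() out of doInBackground() and into onPostExecute(), after pDialog.dismiss(), should make it go away.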

    Read the article

  • Good SQL error handling in Stored Procedures

    - by developerit
When writing SQL stored procedures, it is really important to handle errors carefully; keeping that in mind will probably save you effort, time and money. I have been working with MS-SQL 2000 and MS-SQL 2005 for many years now (I have not yet had the opportunity to work with MS-SQL 2008), and I want to share how I handle errors in a T-SQL stored procedure. This code has been working for many years without a hitch. N.B.: as another best practice, I suggest using only ONE level of TRY … CATCH and only ONE level of TRANSACTION encapsulation, as doing otherwise may not be 100% reliable.

BEGIN TRANSACTION;
BEGIN TRY
    -- Code in the transaction goes here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Rollback on error
    ROLLBACK TRANSACTION;
    -- Re-raise the error with the appropriate message and error severity
    DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int;
    SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY();
    RAISERROR(@ErrMsg, @ErrSeverity, 1);
END CATCH;

In conclusion, I will just mention that I have been using this code with .NET 2.0 and .NET 3.5 and it works like a charm. The .NET TDS parser throws back a SqlException, which is ideal to work with.
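The same behavior holds for any TDS client, since the error re-raised by RAISERROR travels back like any other SQL Server error. As a rough sketch (the connection string, procedure name dbo.SaveOrder and its parameters below are invented for illustration, and Java/JDBC with the Microsoft JDBC driver stands in for the .NET client the author used), the message and severity from the CATCH block surface client-side as a SQLException:

import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class StoredProcErrorDemo {
    public static void main(String[] args) {
        // Hypothetical connection string and procedure, for illustration only.
        String url = "jdbc:sqlserver://localhost;databaseName=Shop;user=app;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             CallableStatement cs = con.prepareCall("{call dbo.SaveOrder(?, ?)}")) {
            cs.setInt(1, 42);                             // order id
            cs.setBigDecimal(2, new BigDecimal("19.99")); // amount
            cs.execute();
        } catch (SQLException e) {
            // The ERROR_MESSAGE()/ERROR_SEVERITY() values passed to RAISERROR
            // arrive here; the transaction was already rolled back inside the
            // procedure, so the client only needs to report the failure.
            System.err.println("Procedure failed: " + e.getMessage());
        }
    }
}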

    Read the article

  • JavaScript Class Patterns

    - by Liam McLennan
To write object-oriented programs we need objects, and likely lots of them. JavaScript makes it easy to create objects:

var liam = { name: "Liam", age: Number.MAX_VALUE };

But JavaScript does not provide an easy way to create similar objects. Most object-oriented languages include the idea of a class, a template for creating objects of the same type; from one class many similar objects can be instantiated. Many patterns have been proposed to address the absence of a class concept in JavaScript. This post will compare and contrast the most significant of them.

Simple Constructor Functions

Classes may be missing, but JavaScript does support special constructor functions. By prefixing a call to a constructor function with the 'new' keyword we can tell the JavaScript runtime that we want the function to behave like a constructor and instantiate a new object containing the members defined by that function. Within a constructor function the 'this' keyword references the new object being created, so a basic constructor function might be:

function Person(name, age) {
  this.name = name;
  this.age = age;
  this.toString = function() {
    return this.name + " is " + age + " years old.";
  };
}

var john = new Person("John Galt", 50);
console.log(john.toString());

Note that by convention the name of a constructor function is always written in Pascal Case (the first letter of each word is capitalized), to distinguish constructor functions from other functions. It is important that constructor functions are called with the 'new' keyword and that non-constructor functions are not. There are two problems with the constructor function pattern shown above:

It makes inheritance difficult.
The toString() function is redefined for each new object created by the Person constructor. This is sub-optimal because the function should be shared between all of the instances of the Person type.

Constructor Functions with a Prototype

JavaScript functions have a special property called prototype. When an object is created by calling a JavaScript constructor, all of the properties of the constructor's prototype become available to the new object. In this way many Person objects can be created that all access the same prototype. An improved version of the above example can be written:

function Person(name, age) {
  this.name = name;
  this.age = age;
}

Person.prototype = {
  toString: function() {
    return this.name + " is " + this.age + " years old.";
  }
};

var john = new Person("John Galt", 50);
console.log(john.toString());

In this version a single instance of the toString() function is shared between all Person objects.

Private Members

The short version is: there aren't any. If a variable is defined with the var keyword within the constructor function, then its scope is that function. Other functions defined within the constructor function will be able to access the private variable, but anything defined outside the constructor (such as functions on the prototype property) won't have access to it. Any variables assigned onto the constructor's object are automatically public. Some people work around this by prefixing properties with an underscore and then, by convention, not calling those properties from outside:
function Person(name, age) {
  this.name = name;
  this.age = age;
}

Person.prototype = {
  _getName: function() {
    return this.name;
  },
  toString: function() {
    return this._getName() + " is " + this.age + " years old.";
  }
};

var john = new Person("John Galt", 50);
console.log(john.toString());

Note that the _getName() function is only private by convention – it is in fact a public function.

Functional Object Construction

Because of the weirdness involved in using constructor functions, some JavaScript developers prefer to eschew them completely. They theorize that it is better to work with JavaScript's functional nature than to try and force it to behave like a traditional class-oriented language. When using the functional approach, objects are created by returning them from a factory function. An excellent side effect of this pattern is that variables defined within the factory function are accessible to the new object (due to closure) but are inaccessible from anywhere else. The Person example implemented using the functional object construction pattern is:

var personFactory = function(name, age) {
  var privateVar = 7;
  return {
    toString: function() {
      return name + " is " + age * privateVar / privateVar + " years old.";
    }
  };
};

var john2 = personFactory("John Lennon", 40);
console.log(john2.toString());

Note that the 'new' keyword is not used for this pattern, and that the toString() function has access to the name, age and privateVar variables because of closure. This pattern can be extended to provide inheritance and, unlike the constructor function pattern, it supports private variables. However, when working with JavaScript code bases you will find that the constructor function pattern is more common – probably because it is a better approximation of mainstream class-oriented languages like C# and Java.

Inheritance

Both of the above patterns can support inheritance, but for now, favour composition over inheritance.

Summary

When JavaScript code exceeds simple browser automation, object orientation can provide a powerful paradigm for controlling complexity. Both of the patterns presented in this article work; the choice is a matter of style. Only one question still remains: who is John Galt?

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the following feature:

Edit MySQL Data. This may be the coolest feature so far: users are able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data as easily as working with data in an Excel spreadsheet, then pushing the changes back to the server.

This version also contains the following bug fixes:

Enabled the following checkboxes in the Append Data's Advanced Options dialog, and added code in the Append Data dialog to use them as follows. "Automatically store the column mapping for the given table": if checked, the current mapping is stored automatically after clicking the Append button, provided the append operation succeeds and no mapping exists yet for the current connection.schema.table; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically": if checked, the first stored mapping in which all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
Fixed code in Append Data that applies a stored column mapping, so it skips target columns where the associated mapping is empty (saved as a -1).
Enclosed the Add-In's startup code in a try-catch block in order to log any error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log, to make logging easier, and updated the log calls throughout all code that contains a try-catch block.
Added code to the main WiX configuration file to check whether a newer version is already installed and, if so, abort the installation.
Fixed code to refresh the Import Procedure Form's preview grid's data source, so its contents repaint every time the Call button is pressed.
Added code to re-pull connections after connections are migrated from Excel to Workbench.
Fixed code so that when the Append Data's Automatic Mapping is performed, any subsequent change to a mapping resets it to a Manual Mapping.
Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
Fixed a GUID in the main WiX configuration file, so previous versions are now uninstalled during a new installation.
Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export dialog will only flag those columns as Excluded.
Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, to display a warning if the table name is not set, and to stack warnings (but not display them) while a column is Excluded; warnings are displayed normally once columns are no longer Excluded.
Added code to prevent the Append and Export of data if more than one selection is made (selecting more than one area while holding the Ctrl key and selecting Excel cells).
Fixed a problem that prevented MySQL for Excel from loading when Display settings in Windows 7 are set to Adjust to Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
Fixed code that renames the auto-generated primary key column when the table name changes; it was not detecting whether a column with the same name already existed in the table. The apparent column duplication was not actually happening; it only looked that way because the automatically generated PK column did not detect that a column already had that name.
Fixed code in the Export Data dialog to always set an empty string instead of null on the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
Added code to display a warning and color a column red when its data type has not been set by the user or has been manually cleared.
Added code to output exception messages to the application log consistently in all places where exceptions are caught.

A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming to this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum, found at http://forums.mysql.com/. Enjoy, and thanks for the support!

    Read the article

  • CodePlex Daily Summary for Monday, June 06, 2011

Popular Releases

Sharepoint 2010 Diagnostic Log Compression: Sharepoint 2010 Diagnostic Log Compression v1.0: Requirements: Microsoft Sharepoint 2010 Foundation or Microsoft Sharepoint 2010 Server. Release version: 1.0
PowerShell SQL Developer Tools: SQLDevTools_Module (folder) Version 0.1.0: SQLDevTools Module version 0.1.0 (alpha release). Just unzip the file and move the "SQLDevTools" folder into your user "WindowsPowerShell\Modules" folder: "C:\Users\UserName...\Documents\WindowsPowerShell\Modules\SQLDevTools". Then, to load the module at either the PowerShell console or the ISE, use: Import-Module SQLDevTools. To remove the module use: Remove-Module SQLDevTools. Again, you need to have SQL Server 2008 (or above), or the standalone version of "SQL Server Management Studio 2008 R2", installed ...
ShowUI: Write-UI -in PowerShell: ShowUI: ShowUI is a PowerShell module to help you write rich user interfaces in script.
SQL Compact Query Analyzer: Build 0.2.0.0: Beta build of SQL Compact Query Analyzer. Features: execute SQL queries against a SQL Server Compact Edition database; easily edit the contents of the database; supports SQLCE 3.5 and 4.0. Prerequisites: .NET Framework 4.0
Media Companion: MC 3.405b weekly: Extract the entire archive to a folder which has user access rights, e.g. desktop, documents, etc. Refer to the documentation on this site for the Installation & Setup Guide. Important! If you are upgrading from a version of MC before 3.404, then, due to an issue where the date added and the full genre information were not being read into the movie cache, it is highly recommended that you perform a Rebuild Movies when first running this latest version. This will read in the information from the ...
Babylon Toolkit: Babylon.Toolkit v1.0.4: Note about samples: in order to run the samples, you need to configure Visual Studio to run them as an "Out-of-browser application"; to do that, go to the property page of a sample project, go to the Debug tab, and check the "Out-of-browser application" radio. New features: new effects, including BasicEffect3Lights (3 dir lights instead of 1 position light), CartoonEffect (work in progress), SkinnedEffect (with normal and specular map support), and SplattingEffect (for multi-texturing with smooth ...
SizeOnDisk: 1.0.8.2: With installer.
TerrariViewer: TerrariViewer v2.5: Added new items associated with Terraria v1.0.3 to the character editor. Fixed multiple bugs with the Piggy Bank editor.
Sterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification to the 1.4 beta. For the 1.0, you will need to upgrade your database; please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...
patterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: many code changes (please see the readme.mht for details), a new "Application Notifications" chapter, and an updated "Server-Side Implementation" chapter. Guidance chapters ready for review: the Word documents for the chapters are included with the source code, in addition to the CHM, to help you provide feedback.
The PDF is provided as a separate download for your convenience. Installation overview: to install and run the reference implementation, you must perform the fol...
Claims Based Identity & Access Control Guide: Release Candidate: Highlights of this release: this is the release candidate drop of the new "Claims Identity Guide" edition. In this release you will find all code samples, including all ACS v2 samples: ACS as a Federation Provider (showing authentication with LiveID, Google, etc.), ACS as a FP with multiple business partners, ACS and REST endpoints, and using a WP7 client with REST endpoints; all ACS-specific chapters; two new chapters on SharePoint (SSO and Federation); and all revised v1 chapters. We are now ...
Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 2) Added the ability to use parallelization during generation; this causes the program to use as many threads as there are physical cores. 3) Fixed some background overdraw.
Caliburn Micro: WPF, Silverlight and WP7 made easy.: Caliburn.Micro v1.1 RTW: Download contents: debug and release assemblies, samples, Changes.txt, License.txt. Release highlights for WP7: a new tombstoning API based on ideas from Fluent NHibernate, a new Launcher/Chooser API, strongly typed navigation, and SimpleContainer included; the full phone lifecycle is made easy to work with, i.e. we figure out for you whether your app is actually resurrecting or just continuing. For WPF: support for the Client Profile and better support for WinForms integration. All platforms: a power...
VidCoder: 0.9.1: Added color coding to the Log window: errors are highlighted in red, HandBrake logs are in black and VidCoder logs are in dark blue. Moved the enqueue button to the right with the other control buttons. Added logic to report failures when errors are logged during the encode or when the encode finishes prematurely. Added a Copy button to the Log window. Adjusted the audio track selection box to always show the full track name. Changed the encode job progress bar to also be colored yellow when the enco...
AutoLoL: AutoLoL v2.0.1: Fixed a small bug in Auto Login. Fixed the updater.
EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.9.0.1: This version has been updated to .NET Framework 3.5. New features: data validation; PivotTables (basic functionality), with support for worksheet data sources, column, row, page and data fields, and date and numeric grouping; built-in styles; ...and more, and some minor new features...
Ranges: Text property to get the formatted value; AutofitColumns method to set the column width from the content of the range; LoadFromCollection metho...
jQuery ASP.Net MVC Controls: Version 1.4.0.0: Version 1.4.0.0 contains the following additions: upgraded to MVC 3.0; upgraded to jQuery 1.6.1 (though the project supports all jQuery versions from 1.4.x onwards); upgraded to jqGrid 3.8; better Razor view-engine support; better pager support, including support for custom pagers; added jqGrid toolbar button support; search module refactored, with full support for multiple filters and ordering; and code cleanup, bug fixes and better controller configuration support.
NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: fix for critical issue 15922 (AccessViolationException) once and for all ...update is strongly recommended. Known issues: some add-in ribbon examples have a COM Register function with a missing codebase entry (win32 registry); the problem affects only the C# examples and is fixed in the latest source code (NetOffice\ReleaseTags\NetOffice Release 0.8.rar). Includes: runtime binaries and source code for .NET Framework v2.0, v3.0, v3.5 and v4.0; tutorials in C# and VB.Net: ...
Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. Synchronized the build number to the Serviio version to avoid confusion.
AcDown - Anime&Comic Downloader: AcDown v3.0 Beta4, 2011-5-31 (the Chinese release notes were lost to an encoding problem; the recoverable fragments mention support for 32- and 64-bit Windows XP/Vista/7, a .NET Framework 2.0 (x86/x64) requirement on Windows XP, and new download features around Bilibili.us, Acfun, 6.cn and QQ). Discussion...

New Projects

.NET library for Audiogalaxy: AgNet is a .NET client library for the Audiogalaxy API.
Asuro scanner: An Asuro program which can recognize light values and perform predefined actions for them. This was realised for a school project.
AutoComplete.Net (for Asp.Net and Asp.Net MVC): AutoComplete.Net is a wrapper around the jQuery UI Autocomplete widget. It makes it easier to add autocomplete functionality without having to bother with tons of javascript. It includes a WebControl for your Asp.Net web applications and a bunch of extension methods for your Asp.Net MVC web applications. Their purpose is to provide developers with an easy interface to the cool jQuery UI Autocomplete widget. It's developed in C#.
AveSoft Advance Keylogger: A free and completely open source keystroke logging application that allows you to capture and record all keyboard input to file. The program is extremely small and lightweight, can run in a hidden mode, and can upload the log file to a specified web server. Due to its open source nature, the software is secure and transparent. Main features: captures all keyboard input; caching and low-level hooks use a minimal amount of CPU cycles; highly customizable; hidden mode; Windows integration; log file...
DDDProjector: (The Japanese description was lost to an encoding problem; the recoverable fragments relate the project to DDD (Domain Driven Design), C#, and O/R mapping with NHibernate.)
DogTouch: DogTouch
EVMDOCS Reports: This project contains the diploma project of a VSTU student. The project is devoted to automating the learning process at the university.
Contains reports in OpenOffice.
Fidely - Search Query Compilation Framework: Fidely is a framework for implementing a search query compiler that parses a search query string and builds up an expression tree (Expression<Func<T, bool>>) to filter a collection object. Fidely allows you to customize the processes of parsing a query string and building up an expression tree, so you can create various search query compilers. The goal of this project is to make it easy to implement collection filtering features in your application.
Frets: Frets is a library for creating and printing guitar chord diagrams.
FuncLib: C# automatic differentiation library; transparent use with operator overloading, unlimited order of differentiation and an unlimited number of variables, with on-the-fly function compilation to IL code for very fast evaluation.
ITAP: A management system for HCM IU's IELTS simulation exam.
jQXB/M Expression Binder: JQ Expression Binder is a lightweight framework (10 kbytes) for the association between javascript objects and HTML elements. Based on jQuery, it provides a natural and flexible way to process the data exchanged between client and server in a real WEB 2.0 paradigm. Using a few custom attributes to decorate your HTML tags, you can take complete control over the management of your business data directly inside your web pages. Easy integration with ASP.NET WebForm web applications, ...
Log Processor: This software helps you automatically analyze and process data in files (such as plain text logs and binary dumps) based on plugins (for multiple input/output file type support) and plain text rules files (for complete customization of the analysis needed). You'll then be able to easily produce stats reports from your application logs.
MiniIndexer: A web application that demonstrates how to make a simple indexing search engine backed by a relational database.
Monads: Monad helpers for .NET, Silverlight and Windows Phone 7.
Movie Browser: Movie Browser helps collect information from IMDb and manage personal movie folders.
Opc Configurable Xml Server/Service (OCXS): A Microsoft .NET 4.0 C# OPC XML Data Access 1.0 server/service.
SQL DAL: Data access layer.
TankMate: A simple Visual Basic application that lists and manages all your fish tanks. The software was initially designed to list fish tanks, take their width/height/depth in inches, and display their volume in litres. The tank calculator was then expanded: it alerts when water needs changing after a set interval of days (the number of days between changes is settable for each tank), any sound file can be used for alert sounds, and it has an online update feature to keep it up to date.
Twoosh!: Simply, a fully-functional Twitter client for Windows... Twoosh makes it easier for Tweeple to Tweet. You'll no longer have to wait for the Twitter page to load or see the fail whale error page. It's developed in Visual Basic .NET 2008.
WCF RIA Services for Azure Tables and SQL/Azure: This project demos RIA services with POCO objects using two different back-end data storage technologies, SQL/Azure and Azure Table Storage. The example shows a simple association between two tables (department and employee) for read, insert and update operations.
Windows Azure Storage Console Client: Windows Azure Storage console client.
XNA 4.0 VirtualCompass: VirtualCompass is an incredibly simple and easy-to-use library for path finding in Microsoft's XNA 4.0. It implements A* (A-star) path finding techniques to find the shortest path. This is an alpha status project.
Limited testing and optimization thus far. Developed in C#.

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
In my previous posting I introduced the latest Oracle Real Application Clusters / Oracle VM template and mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4-node RAC cluster, completely configured, in just over 40 minutes: starting from importing the template into Oracle VM and creating the VMs, to a fully up-and-running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, and deploycluster.py. The setup is a 4-node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, as 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration, and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed; in fact, I connected to the database remotely before logging in at all.

# ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64)
Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
INFO: Verifying all (4) VMs are in Running state
INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
buildcluster: yes
INFO: Starting to send cluster details to all (4) VM(s).......
INFO: Sending to VM with a simple name of "racovm.0"....
INFO: Sending to VM with a simple name of "racovm.1".....
INFO: Sending to VM with a simple name of "racovm.2".....
INFO: Sending to VM with a simple name of "racovm.3"......
INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
Logfile at: /root/rac/deploycluster/deploycluster2.log

my netconfig.ini:

# Node specific information
NODE1=db11rac1
NODE1VIP=db11rac1-vip
NODE1PRIV=db11rac1-priv
NODE1IP=192.168.1.56
NODE1VIPIP=192.168.1.65
NODE1PRIVIP=192.168.2.2
NODE2=db11rac2
NODE2VIP=db11rac2-vip
NODE2PRIV=db11rac2-priv
NODE2IP=192.168.1.58
NODE2VIPIP=192.168.1.66
NODE2PRIVIP=192.168.2.3
NODE3=db11rac3
NODE3VIP=db11rac3-vip
NODE3PRIV=db11rac3-priv
NODE3IP=192.168.1.173
NODE3VIPIP=192.168.1.174
NODE3PRIVIP=192.168.2.4
NODE4=db11rac4
NODE4VIP=db11rac4-vip
NODE4PRIV=db11rac4-priv
NODE4IP=192.168.1.175
NODE4VIPIP=192.168.1.176
NODE4PRIVIP=192.168.2.5
# Common data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.1.1
PRIVADAP=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=raccluster
DOMAINNAME=wimmekes.net
DNSIP=
# Device used to transfer network information to second node in interview mode
NETCONFIG_DEV=/dev/xvdc
# 11gR2 specific data
SCANNAME=db11vip
SCANIP=192.168.1.57

last few lines of the in-VM log file:

2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)

From start_vm to completely configured: 29m:39s. The other 10 minutes went to importing the template, creating the 4 VMs from it, and configuring the shared storage. The result is a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.

    Read the article

  • Automating custom software installation in a zone

    - by mgerdts
In Solaris 11, the internals of zone installation are quite different than they were in Solaris 10. This difference allows the administrator far greater control of what software is installed in a zone. The rules in Solaris 10 are simple and inflexible: if it is installed in the global zone and is not specifically excluded by package metadata from being installed in a zone, it is installed in the zone. In Solaris 11, the rules are still simple, but much more flexible: the packages you tell it to install, and the packages on which they depend, will be installed.

So, where does the default list of packages come from? From the AI (auto installer) manifest, of course. The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml. Within that file you will find:

<software_data action="install">
    <name>pkg:/group/system/solaris-small-server</name>
</software_data>

So, the default installation will install pkg:/group/system/solaris-small-server. Cool. What is that? You can figure out what is in the package by looking for it in the repository with your web browser (click the manifest link), or use pkg(1). In this case, it is a group package (pkg:/group/), so we know that it just has a bunch of dependencies naming the packages it really wants installed.

$ pkg contents -t depend -o fmri -s fmri -r solaris-small-server
FMRI
compress/bzip2
compress/gzip
compress/p7zip
...
terminal/luit
terminal/resize
text/doctools
text/doctools/ja
text/less
text/spelling-utilities
web/wget

If you would like to see the entire manifest from the command line, use pkg contents -r -m solaris-small-server.

Let's suppose that you want to install a zone that also has mercurial and a full-fledged installation of vim, rather than just the minimal vim-core that is part of solaris-small-server. That's pretty easy. First, copy the default AI manifest somewhere where you will edit it, and make it writable:

# cp /usr/share/auto_install/manifest/zone_default.xml ~/myzone-ai.xml
# chmod 644 ~/myzone-ai.xml

Next, edit the file, changing the software_data section as follows:

<software_data action="install">
    <name>pkg:/group/system/solaris-small-server</name>
    <name>pkg:/developer/versioning/mercurial</name>
    <name>pkg:/editor/vim</name>
</software_data>

To figure out the names of the packages, either search the repository using your browser, or use a command like pkg search hg.

Now we are all ready to install the zone. If it has not yet been configured, that must be done as well.

# zonecfg -z myzone 'create; set zonepath=/zones/myzone'
# zoneadm -z myzone install -m ~/myzone-ai.xml
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20111113T004303Z.myzone.install
Image: Preparing at /zones/myzone/root.
Install Log: /system/volatile/install.15496/install_log
AI Manifest: /tmp/manifest.xml.XfaWpE
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Zonename: myzone
Installation: Starting ...
Creating IPS image
Installing packages from:
    solaris
        origin: http://localhost:1008/solaris/54453f3545de891d4daa841ddb3c844fe8804f55/
DOWNLOAD                      PKGS       FILES    XFER (MB)
Completed                  169/169 34047/34047  185.6/185.6
PHASE                         ACTIONS
Install Phase             46498/46498
PHASE                           ITEMS
Package State Update Phase    169/169
Image State Update Phase          2/2
Installation: Succeeded
Note: Man pages can be obtained by installing pkg:/system/manual
done.
Done: Installation completed in 531.813 seconds.

Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20111113T004303Z.myzone.install

Now, for a few things that I've seen people trip over:

Ignore that bit about man pages - it's wrong. Man pages are already installed, so long as the right facet is set properly. That's a topic for another blog entry.
If you boot the zone and then just use zlogin myzone, you will see that the services you care about haven't started and that svc:/milestone/config:default is still starting. That is because you have not yet logged into the console with zlogin -C myzone.
If the zone has been booted for more than a very short while when you first connect to the zone console, it will seem like the console is hung. That's not really the case - hit ^L (control-L) to refresh the sysconfig(1M) screen that is prompting you for information.

    Read the article

< Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >