Search Results

Search found 24094 results on 964 pages for 'console log'.

Page 229 of 964

  • Translate Basic Config.groovy log4j DSL to external log4j.properties

    - by Stephen Swensen
    The following is a basic log4j configuration inside Config.groovy, using the log4j DSL with Grails 1.2. It works as expected (logs all errors to the given file):

        log4j = {
            appenders {
                file name:'file', file:"c:/error.log"
            }
            error 'grails.app'
            root {
                error 'file'
            }
        }

    How would one translate this into a properties-style log4j configuration file? The following does not work:

        log4j.rootLogger=ERROR, FA
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=C:/error.log
        log4j.logger.grails.app=ERROR, FA

    I suspect it has something to do with the translation of error 'grails.app', but I really don't know. If it makes any difference, the properties file is configured externally:

        grails.config.locations = ["file:${basedir}/extconf/log4j.properties"]

    All I really want is an external log4j properties file which logs all application exceptions to a file.
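
    One hedged observation about the properties attempt above, rather than a confirmed fix: the DSL configures a layout for the file appender automatically, but in a properties file a FileAppender needs one declared explicitly, and without additivity turned off the grails.app errors reach the file appender twice (once directly and once via the root logger). A minimal sketch with both points addressed:

        log4j.rootLogger=ERROR, FA
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=C:/error.log
        log4j.appender.FA.layout=org.apache.log4j.PatternLayout
        log4j.appender.FA.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
        log4j.logger.grails.app=ERROR, FA
        log4j.additivity.grails.app=false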


  • SmtpClient.SendAsync code review

    - by Lieven Cardoen
    I don't know if this is the way it should be done:

        {
            ...
            var client = new SmtpClient { Host = _smtpServer };
            client.SendCompleted += SendCompletedCallback;
            var userState = mailMessage;
            client.SendAsync(mailMessage, userState);
            ...
        }

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            var mailMessage = (MailMessage)e.UserState;
            if (e.Cancelled)
            {
                Log.Info(String.Format("[{0}] Send canceled.", mailMessage));
            }
            if (e.Error != null)
            {
                Log.Error(String.Format("[{0}] {1}", mailMessage, e.Error));
            }
            else
            {
                Log.Info("Message sent.");
            }
            mailMessage.Dispose();
        }

    Disposing the mailMessage right after the client.SendAsync(...) call throws an error, so I need to dispose of it in the callback handler.


  • Getting MySQL to work with Entity Framework 4.0

    - by DigiMortal
    Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up one experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data to EF. I will also give some suggestions on how to deploy your applications to hosting and cloud environments.

    MySQL stuff

    As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop stuff when I'm offline. The other thing you need is the MySQL Connector for .NET Framework. Currently there is a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET:

    - MySQL Community Server
    - Connector/NET 6.3.5

    If you are not a big fan of phpMyAdmin, then you can try out a free desktop client for MySQL – HeidiSQL. I am using it and I am really happy with this program.

    NB! If you have just put up MySQL, then also create a database with a couple of tables in it. To use all the features of Entity Framework 4.0, I suggest you use InnoDB or another engine that has support for foreign keys.

    Connecting MySQL to Entity Framework 4.0

    Now create a simple console project using Visual Studio 2010 and go through the following steps.

    1. Add a new ADO.NET Entity Data Model to your project. For the model, insert a name that is informative and that you will be able to recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK.

    2. Set up the database connection. Change the data connection and select MySQL Database as the data source. You may also need to set the provider – there is only one choice; select it if the data provider combo shows an empty value. Click OK and insert the connection information you are asked about. Don't forget to click the test connection button to see if your connection data is okay. If everything works, click OK.

    3. Insert the context name. Now you should see the following dialog. Insert your data model name for the application configuration file and click OK. Click the next button.

    4. Select tables for the model. Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" – it is damn annoying to get them out of the model later. Also insert an informative and easy to remember name for your model. Click the finish button.

    5. Define your classes. Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically – that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members.

    NB! Don't forget to look at the properties of your classes (in the properties window) and modify their set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities).

    6. Let's test! Now let's write a simple testing program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events I attended.

        using (var context = new MySqlEntities())
        {
            var myEvents = from e in context.Events
                           from a in e.Attendees
                           where a.Person.FirstName == "Gunnar" &&
                                 a.Person.LastName == "Peipman"
                           select e;

            Console.WriteLine("My events: ");

            foreach (var e in myEvents)
            {
                Console.WriteLine(e.Title);
            }
        }

        Console.ReadKey();

    And when I run it I get the result shown in the screenshot on the right. I checked against the database and these results are correct. At first run the connector seems to work slowly, but this is only an effect of the first run. As the connector is loaded into memory by Entity Framework, it works fast from that point on. Now let's see what we have to do to get our program working in hosting and cloud environments where the MySQL connector is not installed.

    Deploying the application to hosting and cloud environments

    If your hosting or cloud environment has no MySQL connector installed, you have to provide the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools):

    - MySQL.Data
    - MySQL.Data.Entity
    - MySQL.Web

    You can also add references to these assemblies and mark the references as local so the assemblies are copied to the binary folder of your application. If you have references to these assemblies, then you don't have to include them in your project from the bin folder. Also add the following block to your application configuration file:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          ...
          <system.data>
            <DbProviderFactories>
              <add
                name="MySQL Data Provider"
                invariant="MySql.Data.MySqlClient"
                description=".Net Framework Data Provider for MySQL"
                type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
                      Version=6.2.0.0, Culture=neutral,
                      PublicKeyToken=c5687fc88969c44d"
              />
            </DbProviderFactories>
          </system.data>
          ...
        </configuration>

    Conclusion

    It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework we used the InnoDB engine, because it supports foreign keys. It was also easy to query our model. To get our project online we needed only some easy modifications to our project and configuration files.


  • Symfony2: automatically logging in users from their Windows session

    - by Paul Maclean
    In Symfony2 I have built an intranet. It currently uses the FOSUserBundle and an LDAP bundle to log users in, and I would like to add the functionality to log users in from their Windows session. I found an NTLM script for PHP and an updated version of it, but I haven't been able to incorporate them into Symfony2. I also found an NTLM bundle for Symfony2, but it was written for an older version of Symfony and is not maintained anymore. I was unable to rewrite it and get it to work. My question is: how could I automatically log users in from their Windows session in my Symfony2 app, in addition to the already present LDAP functionality? What would be the best and easiest way?


  • Not getting a response when using Android Async HTTP (from loopj)

    - by conor
    I am using the Async Http library from loopj.com and also the sample code from the site. The problem is that when the request is made I don't get a response. I have even overridden the onFinish() function, which isn't getting fired either. I am using the sample code from their site, which is as follows:

        import com.loopj.android.http.AsyncHttpClient;
        import com.loopj.android.http.AsyncHttpResponseHandler;

        Log.v("bopzy_debug", "Testing HTTP Connectivity");
        System.out.println("123");

        AsyncHttpClient client = new AsyncHttpClient();
        client.get("http://www.google.com", new AsyncHttpResponseHandler() {
            @Override
            public void onSuccess(String response) {
                Log.v("bopzy_debug", response);
            }

            @Override
            public void onFinish() {
                Log.v("bopzy_debug", "Finished..");
            }
        });

    Any ideas on how to solve this would be greatly appreciated; I'm not really sure what is going on here.
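
    As a point of comparison, a minimal sketch of the same call hosted inside an Activity's onCreate (the class name is hypothetical, and the onFailure override is an addition: with the older loopj API used here, a network error or a missing android.permission.INTERNET entry in the manifest typically surfaces there rather than in onSuccess):

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        import com.loopj.android.http.AsyncHttpClient;
        import com.loopj.android.http.AsyncHttpResponseHandler;

        public class HttpTestActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                AsyncHttpClient client = new AsyncHttpClient();
                client.get("http://www.google.com", new AsyncHttpResponseHandler() {
                    @Override
                    public void onSuccess(String response) {
                        Log.v("bopzy_debug", response);
                    }

                    @Override
                    public void onFailure(Throwable error) {
                        // A failed request (e.g. no INTERNET permission) shows up here,
                        // which explains silent runs where only onFinish is overridden.
                        Log.v("bopzy_debug", "Request failed: " + error);
                    }

                    @Override
                    public void onFinish() {
                        Log.v("bopzy_debug", "Finished..");
                    }
                });
            }
        }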


  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    I'm trying to make a database connection from another script, but the script doesn't work properly: if I do a 'print' on the rows I get the value 'null', but if I use a 'select * from incidents' query I get the result from the table incidents.

        import database

        rows = database.database("INSERT INTO incidents VALUES(3 ,'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
        #print database.database()
        print rows

    The database.py script:

        import psycopg2
        import sys
        import logfile

        def database(query):
            logfile.log(20, 'database.py', 'Executing...')
            con = None
            try:
                con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
                cur = con.cursor()
                #print query
                cur.execute(query)
                rows = cur.fetchall()
                con.commit()
                #test row does work
                #cur.execute("INSERT INTO incidents VALUES(3 ,'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
            except:
                logfile.log(40, 'database.py', 'Er is iets mis gegaan')
                logfile.log(40, 'database.py', str(sys.exc_info()))
            finally:
                if con:
                    con.close()
                return rows


  • SQL CLR Assembly Error 80131051 when late binding to a registered C# COM .dll

    - by Shanubus
    I must have hit an unusual one, because I can't find any reference to this specific failure anywhere...

    Scenario: I have a legacy SQL function used to transform (encrypt) data. This function is called from within many stored procedures used by multiple applications. I say this because the obvious answer of 'just call it from your code' is not really an option (or at least one I'd prefer not to explore). The legacy function used sp_OA with an ActiveX dll on SQL 2000 to perform its work. The new function is targeted at SQL 2008 x64. I am ditching the sp_OA call in favor of a CLR assembly, and am getting rid of the ActiveX dll and using a COM+ .dll (3rd party) to perform the same work. This 3rd party COM+ is required to be used based on the spec given to me, so I can't get rid of this piece either.

    Problem: After multiple attempts at getting this to work, I have eliminated the following approaches:

    1) Create a SQL assembly to call the local COM+ directly -- Can't do this, as it requires a reference to System.EnterpriseServices. Including this requires that a whole slew of unsupported assemblies be registered, which I don't want. The COM+ requires its methods to be accessed via an interface, so my attempts at late binding to it directly have not been successful (late binding would allow me to drop the unsupported references).

    2) Create a SQL assembly which references a C# class library that then calls the COM+. -- Same issue as #1; since the referenced dll uses System.EnterpriseServices and will be added as a dependency when referenced in the SQL assembly, this again tries to load all the unsupported libraries.

    3) Create a SQL assembly which late binds to an ActiveX COM dll that calls the COM+. -- Worked in my dev environment, but I can't go to x64 in production with ActiveX dlls written in VB6 (not to mention I hate backtracking anyway)... again failure...

    I am now onto an approach that is almost working, with of course one last hangup. I now have a SQL assembly that late binds to a C# COM dll, eliminating the need to include System.EnterpriseServices and eliminating the need to reference the C# COM in the SQL assembly itself. The C# COM does reference System.EnterpriseServices to call the COM+, but since I am late binding to it from the SQL assembly, I bypass the need for SQL to actually load them as referenced assemblies.

    - Works in the debugger.
    - Works on my dev box when the SqlAssembly dll is referenced in a test console app and called directly.
    - Installs to SQL 2008 just fine.
    - Executing the actual UDF works, but returns no data due to a failure reported from the late-bound dll!

    So the SqlAssembly is instantiated just fine. It actually fails on its late binding to the C# COM, which works from a test console app on the same machine. It appears to be a difference in behavior based on whether it is called from within the SQL UDF or not. Since it is working on the same box from my console app, I am assuming it's on the SQL side. My steps to install were:

    -- Install the COM+ dll and ensure it can be called successfully (as from within the console app)
    -- Register the C# COM dll (which calls the COM+) and get it into the GAC (again proved to be working from the console app)
    -- Create my asymmetric key:

        CREATE ASYMMETRIC KEY SqlCryptoKey FROM EXECUTABLE FILE = 'D:\SqlEx.dll'
        CREATE LOGIN SqlExLogin FROM ASYMMETRIC KEY SqlExKey
        GRANT UNSAFE ASSEMBLY TO SqlExLogin
        GO

    -- Add the assembly:

        CREATE ASSEMBLY SqlEx FROM 'D:\SqlEx.dll'
        WITH PERMISSION_SET = UNSAFE;
        GO

    -- Create the function:

        CREATE FUNCTION dbo.f_SqlEx( @clearText [nvarchar](512) )
        RETURNS nvarchar(512)
        WITH EXECUTE AS CALLER
        AS EXTERNAL NAME SqlEx.[SqlEx.SqlEx].Ex
        GO

    With all that done, I can now call my function:

        SELECT dbo.f_SqlEx('test')

    But I get this error in the event log:

        Retrieving the COM class factory for component with CLSID {F69D6320-5884-323F-936A-7657946604BE} failed due to the following error: 80131051.

    I can't really provide direct code examples, due to internal security implications; but all the code itself seems to work, so I suspect perms or something of the like... I just find it odd that I can't find any reference to error 80131051. If someone out there believes some 'indirect' code samples will help, I will be happy to provide them. Any assistance is appreciated.


  • TimeoutException in simultaneous calls to WCF services from Silverlight application

    - by Alexander K.
    Analysing log files, I've noticed that ~1% of service calls ended with a TimeoutException on the Silverlight client side. The services (WCF) are quite simple and do not perform long computations. According to the log, all calls to the services are always processed in less than 1 sec (even when the TimeoutException occurs on the client!), so it is not a server timeout. So what is wrong? Could it be a configuration or network problem? How can I avoid it? What additional logging information would be helpful for localizing this issue? The only workaround I've thought up is to retry service calls after a timeout. I will appreciate any help on this issue!

    Update: On startup the application performs 17 service calls, 12 of them simultaneously (might that be the cause of the failure?).

    Update: The WCF log contains no useful information about this issue. It seems some service calls do not reach the server side.


  • SQL Server: Database stuck in "Restoring" state

    - by Ian Boyd
    I backed up a database:

        BACKUP DATABASE MyDatabase
        TO DISK = 'MyDatabase.bak'
        WITH INIT --overwrite existing

    And then tried to restore it:

        RESTORE DATABASE MyDatabase
        FROM DISK = 'MyDatabase.bak'
        WITH REPLACE --force restore over specified database

    And now the database is stuck in the restoring state. Some people have theorized that it's because there was no log file in the backup, and it needed to be rolled forward using:

        RESTORE DATABASE MyDatabase
        WITH RECOVERY

    Except that, of course, fails:

        Msg 4333, Level 16, State 1, Line 1
        The database cannot be recovered because the log was not restored.
        Msg 3013, Level 16, State 1, Line 1
        RESTORE DATABASE is terminating abnormally.

    And exactly what you want in a catastrophic situation is a restore that won't work. The backup contains both a data and a log file:

        RESTORE FILELISTONLY
        FROM DISK = 'MyDatabase.bak'

        Logical Name    PhysicalName
        ============    ============
        MyDatabase      C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf
        MyDatabase_log  C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF


  • flush output in Bourne Shell

    - by n-alexander
    I use echo in Upstart scripts to log things:

        script
            echo "main: some data" >> log
        end script

        post-start script
            echo "post-start: another data" >> log
        end script

    Now these two run in parallel, so in the logs I often see:

        main: post-start: some data
        another data

    This is not critical, so I won't employ proper synchronization, but I thought I'd turn auto-flush ON to at least reduce this effect. Is there an easy way to do that?

    Update: yes, flushing will not properly fix it, but I've seen it help such situations to some degree, and that is all I need in this case. It's just that I don't know how to do it in shell.


  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL-based environment, unlike the CLR world, we pass around the 'state' of an object and not a reference to the object. Well, firstly, what does 'state' mean, and does this also mean that we can send a struct where a class is expected (or vice versa) as long as their 'state' is one and the same? Let's see. So I have an operation contract defined as below:

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract]
            Employee SaveEmployee(Employee employee);
        }

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Quite a simplistic operation there (which translates to 'absolutely no business value'). Now, the data contract Employee mentioned above is a struct:

        public struct Employee
        {
            public int EmployeeId { get; set; }

            public string FName { get; set; }
        }

    After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I've ignored the rest of the details just to avoid unwanted confusion):

        public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    I call the service with the code below:

        private static void CallWcfService()
        {
            Employee employee = new Employee { FName = "A" };
            Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType);
            Console.WriteLine("IsClass: {0}", employee.GetType().IsClass);
            Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName);
            employee = LearnWcfServiceClient.SaveEmployee(employee);
            Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName);
        }

    The output is: (screenshot)

    I now change my Employee type from a struct to a class in the proxy class and run the application:

        public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged {

    The output this time is: (screenshot)

    The state of an object refers to its composition, i.e. the properties and the values of those properties, and not to whether it is a reference type (class) or a value type (struct). And as shown above, we're actually passing an object by its state and not by reference.

    Continuing on the same topic of 'type interchangeability': WCF treats two data contracts as equivalent if they have the same 'wire representation'. We can arrange that using the DataContract and DataMember attributes' Name property:

        [DataContract]
        public struct Person
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }
        }

        [DataContract(Name="Person")]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name="FirstName")]
            public string FName { get; set; }
        }

    I've created two data contracts with the exact same wire representation. Just remember that the names and the types of the data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted - Person. This is because we're saying that the Employee type has the same wire representation as the Person type.

    Also, the signature of the SaveEmployee operation gets changed on the proxy side:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
        [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")]
        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")]
            ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee);
        }

    But on the service side, SaveEmployee still accepts and returns an Employee data contract:

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Despite all these changes, our output remains the same as the last one (screenshot). This is type interchangeability at work!

    Here's one more thing to ponder. Our Person type is a struct and our Employee type is a class. Then how is it that the Person type got emitted as a 'class' in the proxy? It's worth mentioning that the WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below):

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:tem="http://tempuri.org/"
                          xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication">
          <soapenv:Header/>
          <soapenv:Body>
            <tem:SaveEmployee>
              <!--Optional:-->
              <tem:employee>
                <!--Optional:-->
                <ser:EmployeeId>?</ser:EmployeeId>
                <!--Optional:-->
                <ser:FName>?</ser:FName>
              </tem:employee>
            </tem:SaveEmployee>
          </soapenv:Body>
        </soapenv:Envelope>

    There are some differences between how 'Add Service Reference' and svcutil.exe generate the proxy class, but it turns out both do some kind of reflection to determine the type of the data contract, and they emit the code accordingly. So since the Employee type is a class, the proxy 'Person' type gets generated as a class. In fact, reflecting on the svcutil.exe application, you'll see that there are a couple of places wherein a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. It seems these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool, though. The WSDL tool does not do any kind of reflection on the data contract / serialized type; it emits the type as a class by default. You can check this using the two command lines below: (screenshot)

    Note to self: remember 'state' and type interchangeability when traversing the WSDL planet!


  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option, which worked flawlessly.

    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.

    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example if you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.

    4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.

    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.

    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.

    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.

    10. Click on Create Build Configuration, and name it something like "Integration build".

    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.

    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.

    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment

    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.

    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.

    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe

    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.

    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2:

        /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.

    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.

    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.

    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or by emailing me directly using the email address below.

    Technorati Tags: SQL Server


  • log4j category

    - by Nuno Furtado
    I have the following in my log4j.properties:

        log4j.rootLogger = debug, stdout, fileLog

        log4j.appender.stdout = org.apache.log4j.ConsoleAppender

        log4j.appender.fileLog = org.apache.log4j.RollingFileAppender
        log4j.appender.fileLog.File = C:/logs/services.log
        log4j.appender.fileLog.MaxFileSize = 256MB
        log4j.appender.fileLog.MaxBackupIndex = 32

        #Category: ConsultaDados
        log4j.category.ConsultaDados=ConsultaDados

        log4j.appender.ConsultaDados=org.apache.log4j.DailyRollingFileAppender
        log4j.appender.ConsultaDados.layout=org.apache.log4j.PatternLayout
        log4j.appender.ConsultaDados.layout.ConversionPattern={%t} %d - [%p] %c: %m %n
        log4j.appender.ConsultaDados.file=C:/logs/consulta.log
        log4j.appender.ConsultaDados.DatePattern='.' yyyy-MM-dd-HH-mm

    And I'm creating my logger with:

        myLogger = Logger.getLogger("ConsultaDados");

    But this doesn't log my calls to the file; they get thrown into the rootLogger. Any ideas?
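
    A hedged reading of why this happens: a log4j.category line expects a level, optionally followed by appender names. As written, log4j.category.ConsultaDados=ConsultaDados assigns the category no appender at all, so its events simply bubble up to the root logger's appenders. A minimal sketch of the likely intended wiring (assuming DEBUG is the desired level):

        log4j.category.ConsultaDados=DEBUG, ConsultaDados
        log4j.additivity.ConsultaDados=false

    Setting additivity to false stops the same events from also being delivered to the rootLogger appenders.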


  • Session state in ASP.NET MVC

    - by tiff
    I would like to know how to use session state for a simple log in / log out in ASP.NET MVC. I have code here in my controller, which retrieves the session log in data from my MySQL database, but I don't really know how to manipulate it:

        <AcceptVerbs(HttpVerbs.Post)> _
        Function Index(ByVal username As String, ByVal password As String, ByVal department As String) As ActionResult
            Dim user As DataTable
            user = Account.userSelect(username:=username, password:=password, department:=department)
            If user.Rows.Count = 0 Then
                Return RedirectToAction("Index", "Home")
            Else
                Session("username") = user.Rows(0).Item("username")
                Session("department") = user.Rows(0).Item("department")
                Return RedirectToAction("News", "Administration")
            End If
        End Function

    Thank you!


  • Add Trace methods to System.Diagnostics.TraceListener

    - by user200295
    I wrote a Log class derived from System.Diagnostics.TraceListener, like so:

        public class Log : TraceListener

    This acts as a wrapper for Log4Net and allows people to use System.Diagnostics tracing like so:

        Trace.Listeners.Clear();
        Trace.Listeners.Add(new Log("MyProgram"));
        Trace.TraceInformation("Program Starting");

    There is a request to add additional tracing levels beyond the default Trace ones (Error, Warning, Information). I want to have these added to System.Diagnostics.Trace so they can be used like:

        Trace.TraceVerbose("blah blah");
        Trace.TraceAlert("Alert!");

    Is there any way I can do this with an extension class? I tried:

        public static class TraceListenerExtensions
        {
            public static void TraceVerbose(this Trace trace) {}
        }

    but nothing is being exposed on the trace instance being passed in :(


  • Adding a WPF TextWriterTraceListener in an Outlook Add-In using a WPF window/control

    - by Deepak N
    I'm working on an Outlook 2003 add-in using VSTO SE. We have a few customized windows developed in WPF. It looks like a few client machines have a problem with WPF rendering, due to which there could be an exception that gets the add-in disabled. I added an outlook.exe.config and added trace listeners for the WPF trace sources. I set it up according to this link. The console trace listener is working fine for me, but I'm not able to get the TextWriterTraceListener working with this config:

        <add name="textListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="Trace.log" />

    I tried giving an absolute path for the trace log file, such as "C:\Trace.log". The TextWriterTraceListener worked for a dummy WPF app with the same config. Am I missing anything here?


  • Determine the ContentObserver

    - by tommieb75
    I have this content observer that is watching the call log:

        public class monitorCallLog extends ContentObserver {
            private static final String TAG = "monitorCallLog";

            public monitorCallLog(Handler handler) {
                super(handler);
                // TODO Auto-generated constructor stub
            }

            @Override
            public boolean deliverSelfNotifications() {
                return false;
            }

            @Override
            public void onChange(boolean selfChange) {
                Log.v(TAG, "[onChange] *** ENTER ***");
                super.onChange(selfChange);
                // Code goes in here to handle the job of tracking....
                Log.v(TAG, "[onChange] *** LEAVE ***");
            }
        }

    Now... how can I determine the nature of the change on the URI content://call_log/calls? I want to check whether a deletion has occurred on the said URI, but there is no way of knowing... any query/delete/insert/update on that URI triggers the onChange method. Any tips/suggestions?
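
    For reference, a minimal sketch of how such an observer is typically registered (a code fragment, assuming it runs somewhere with a valid Context, e.g. a Service). At this API level onChange carries no hint about what changed or how; a common workaround, noted in the comments, is to cache the call log's row IDs and diff them against a fresh query on each notification, treating missing IDs as deletions:

        Handler handler = new Handler();
        monitorCallLog observer = new monitorCallLog(handler);

        // true = also be notified for changes to descendant URIs of the call log.
        getContentResolver().registerContentObserver(
                android.provider.CallLog.Calls.CONTENT_URI, true, observer);

        // In onChange: re-query CallLog.Calls.CONTENT_URI for the _ID column,
        // compare against the previously cached ID set, and treat IDs that have
        // disappeared as deletions. (Sketch only; the caching is left out here.)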


  • Java finalize method call

    - by Rajesh Kumar J
    The following is my class code:

        import java.net.*;
        import java.util.*;
        import java.sql.*;
        import org.apache.log4j.*;

        class Database {
            private Connection conn;
            private org.apache.log4j.Logger log;
            private static Database dd = new Database();

            private Database() {
                try {
                    log = Logger.getLogger(Database.class);
                    Class.forName("com.mysql.jdbc.Driver");
                    conn = DriverManager.getConnection("jdbc:mysql://localhost/bc", "root", "root");
                    conn.setReadOnly(false);
                    conn.setAutoCommit(false);
                    log.info("Datbase created");
                    /*Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                    conn=DriverManager.getConnection("jdbc:odbc:rmldsn");
                    conn.setReadOnly(false);
                    conn.setAutoCommit(false);*/
                } catch (Exception e) {
                    log.info("Cant Create Connection");
                }
            }

            public static Database getDatabase() {
                return dd;
            }

            public Connection getConnection() {
                return conn;
            }

            @Override
            protected void finalize() throws Throwable {
                try {
                    conn.close();
                    Runtime.getRuntime().gc();
                    log.info("Database Close");
                } catch (Exception e) {
                    log.info("Cannot be closed Database");
                } finally {
                    super.finalize();
                }
            }
        }

    A Database object can be initialized only through the getDatabase() method. Below is the program, which uses the single database connection for four threads:

        public class Main extends Thread {
            public static int c = 0;
            public static int start, end;
            private int lstart, lend;
            public static Connection conn;
            public static Database dbase;
            public Statement stmt, stmtEXE;
            public ResultSet rst;

            /**
             * @param args the command line arguments
             */
            static {
                dbase = Database.getDatabase();
                conn = dbase.getConnection();
            }

            Main(String s) {
                super(s);
                try {
                    stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
                    start = end;
                    lstart = start;
                    end = end + 5;
                    lend = end;
                    System.out.println("Start -" + lstart + " End-" + lend);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void run() {
                try {
                    URL url = new URL("http://localhost:8084/TestWeb/");
                    rst = stmt.executeQuery("SELECT * FROM bc.cdr_calltimestamp limit " + lstart + "," + lend);
                    while (rst.next()) {
                        try {
                            rst.updateInt(2, 1);
                            rst.updateRow();
                            conn.commit();
                            HttpURLConnection httpconn = (HttpURLConnection) url.openConnection();
                            httpconn.setDoInput(true);
                            httpconn.setDoOutput(true);
                            httpconn.setRequestProperty("Content-Type", "text/xml");
                            //httpconn.connect();
                            String reqstring = "<?xml version=\"1.0\" encoding=\"US-ASCII\"?>" +
                                    "<message><sms type=\"mt\"><destination messageid=\"PS0\"><address><number" +
                                    "type=\"international\">" + rst.getString(1) + "</number></address></destination><source><address>" +
                                    "<number type=\"unknown\"/></address></source><rsr type=\"success_failure\"/><ud" +
                                    "type=\"text\">Hello World</ud></sms></message>";
                            httpconn.getOutputStream().write(reqstring.getBytes(), 0, reqstring.length());
                            byte b[] = new byte[httpconn.getInputStream().available()];
                            //System.out.println(httpconn.getContentType());
                            httpconn.getInputStream().read(b);
                            System.out.println(Thread.currentThread().getName() + new String(" Request" + rst.getString(1)));
                            //System.out.println(new String(b));
                            httpconn.disconnect();
                            Thread.sleep(100);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                    System.out.println(Thread.currentThread().getName() + " " + new java.util.Date());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static void main(String[] args) throws Exception {
                System.out.println(new java.util.Date());
                System.out.println("Memory-before " + Runtime.getRuntime().freeMemory());
                Thread t1 = new Main("T1-");
                Thread t2 = new Main("T2-");
                Thread t3 = new Main("T3-");
                Thread t4 = new Main("T4-");
                t1.start();
                t2.start();
                t3.start();
                t4.start();
                System.out.println("Memory-after " + Runtime.getRuntime().freeMemory());
            }
        }

    I need to close the connection after all the threads have finished executing. Is there a good way to do so? Kindly help me out in getting this to work.
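
    One possibility, offered as a sketch rather than a definitive fix: instead of relying on finalize (which the JVM is not obliged to run promptly, or at all), join the four threads at the end of main and close the shared connection explicitly. Thread.join blocks until that thread's run method returns, so close only happens after all workers are done:

        t1.start();
        t2.start();
        t3.start();
        t4.start();

        // Wait for every worker to finish, then release the shared connection.
        t1.join();
        t2.join();
        t3.join();
        t4.join();
        conn.close();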


  • Problem installing Ruby 1.9.2 with RVM on OSX 10.4

    - by questionmark
    Hi, I successfully installed Ruby 1.8.7 with RVM on OS 10.4. However, when I try to install 1.9.2, I get the following error:

        make: *** [libruby.1.9.1.dylib] Error 1

    Installation:

        [qm]$ rvm install 1.9.2
        /Users/qm/.rvm/rubies/ruby-1.9.2-p136, this may take a while depending on your cpu(s)...
        ruby-1.9.2-p136 - #fetching
        ruby-1.9.2-p136 - #downloading ruby-1.9.2-p136, this may take a while depending on your connection...
        ruby-1.9.2-p136 - #extracting ruby-1.9.2-p136 to /Users/qm/.rvm/src/ruby-1.9.2-p136
        ruby-1.9.2-p136 - #extracted to /Users/qm/.rvm/src/ruby-1.9.2-p136
        ruby-1.9.2-p136 - #configuring
        ruby-1.9.2-p136 - #compiling
        Error running 'make ', please read /Users/qm/.rvm/log/ruby-1.9.2-p136/make.log
        There has been an error while running make. Halting the installation.

    Looking at the end of the make log:

        MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1
        /usr/libexec/gcc/powerpc-apple-darwin8/4.0.1/libtool: internal link edit command failed
        make: *** [libruby.1.9.1.dylib] Error 1

    Thanks for any help/suggestions!


  • How to calculate bandwidth consumption for a hosting account using C# in an ASP.NET application?

    - by Steve Johnson
    Hi all, I am working on SaaS hosting software. A large number of sites are hosted on the server. I am trying to calculate bandwidth consumption (bytes transferred in and out) using C#, as described here, using the MS Log Parser. In the above case, if the log files are deleted by the user or any administrator, the bandwidth calculation will not be possible.

    Q1: What is the standard way to measure the bandwidth of the various hosting accounts (websites) on a single server?

    Q2: If the Log Parser mechanism (as described above) is used, then how do you take care of the security issue? Is there some system directory, event viewer log, or something else which cannot be deleted except by the System account and which contains bandwidth data?

    Please point me in the right direction. Thanks


  • Store value of os.system or os.popen

    - by chrissygormley
    Hello, I want to grep the errors out of a log file and save the count in a variable called errors. When I use:

        errors = os.system("cat log.txt | grep 'ERROR' | wc -l")

    I get the return code indicating whether the command worked or not. When I use:

        errors = os.popen("cat log.txt | grep 'ERROR' | wc -l")

    I get an object describing what the command is trying to do, rather than its output. When I run the command itself in the command line, I get 3, as that's how many errors there are. Can anyone suggest another way? Thanks


  • Review my ASP.NET Authentication code.

    - by Niels Bosma
    I have had some problems with authentication in ASP.NET. I haven't used most of the built-in authentication in .NET. I've gotten complaints from users of Internet Explorer (any version - it may affect other browsers as well) that the login process proceeds, but when they are redirected they aren't authenticated and are bounced back to the login page (pages that require authentication check whether the user is logged in and, if not, redirect back to the login page). Can this be a cookie problem? Do I need to check if cookies are enabled by the user? What's the best way to build authentication if you have a custom member table and don't want to use the ASP.NET login controls? Here is my current code:

        using System;
        using System.Linq;
        using MyCompany;
        using System.Web;
        using System.Web.Security;
        using MyCompany.DAL;
        using MyCompany.Globalization;
        using MyCompany.DAL.Logs;
        using MyCompany.Logging;

        namespace MyCompany
        {
            public class Auth
            {
                public class AuthException : Exception
                {
                    public int StatusCode = 0;

                    public AuthException(string message, int statusCode) : base(message)
                    {
                        StatusCode = statusCode;
                    }
                }

                public class EmptyEmailException : AuthException
                {
                    public EmptyEmailException() : base(Language.RES_ERROR_LOGIN_CLIENT_EMPTY_EMAIL, 6) { }
                }

                public class EmptyPasswordException : AuthException
                {
                    public EmptyPasswordException() : base(Language.RES_ERROR_LOGIN_CLIENT_EMPTY_PASSWORD, 7) { }
                }

                public class WrongEmailException : AuthException
                {
                    public WrongEmailException() : base(Language.RES_ERROR_LOGIN_CLIENT_WRONG_EMAIL, 2) { }
                }

                public class WrongPasswordException : AuthException
                {
                    public WrongPasswordException() : base(Language.RES_ERROR_LOGIN_CLIENT_WRONG_PASSWORD, 3) { }
                }

                public class InactiveAccountException : AuthException
                {
                    public InactiveAccountException() : base(Language.RES_ERROR_LOGIN_CLIENT_INACTIVE_ACCOUNT, 5) { }
                }

                public class EmailNotValidatedException : AuthException
                {
                    public EmailNotValidatedException() : base(Language.RES_ERROR_LOGIN_CLIENT_EMAIL_NOT_VALIDATED, 4) { }
                }

                private readonly string CLIENT_KEY = "9A751E0D-816F-4A92-9185-559D38661F77";
                private readonly string CLIENT_USER_KEY = "0CE2F700-1375-4B0F-8400-06A01CED2658";

                public Client Client
                {
                    get
                    {
                        if (!IsAuthenticated) return null;
                        if (HttpContext.Current.Items[CLIENT_KEY] == null)
                        {
                            HttpContext.Current.Items[CLIENT_KEY] = ClientMethods.Get<Client>((Guid)ClientId);
                        }
                        return (Client)HttpContext.Current.Items[CLIENT_KEY];
                    }
                }

                public ClientUser ClientUser
                {
                    get
                    {
                        if (!IsAuthenticated) return null;
                        if (HttpContext.Current.Items[CLIENT_USER_KEY] == null)
                        {
                            HttpContext.Current.Items[CLIENT_USER_KEY] = ClientUserMethods.GetByClientId((Guid)ClientId);
                        }
                        return (ClientUser)HttpContext.Current.Items[CLIENT_USER_KEY];
                    }
                }

                public Boolean IsAuthenticated { get; set; }

                public Guid? ClientId
                {
                    get
                    {
                        if (!IsAuthenticated) return null;
                        return (Guid)HttpContext.Current.Session["ClientId"];
                    }
                }

                public Guid? ClientUserId
                {
                    get
                    {
                        if (!IsAuthenticated) return null;
                        return ClientUser.Id;
                    }
                }

                public int ClientTypeId
                {
                    get
                    {
                        if (!IsAuthenticated) return 0;
                        return Client.ClientTypeId;
                    }
                }

                public Auth()
                {
                    if (HttpContext.Current.User.Identity.IsAuthenticated)
                    {
                        IsAuthenticated = true;
                    }
                }

                public void RequireClientOfType(params int[] types)
                {
                    if (!(IsAuthenticated && types.Contains(ClientTypeId)))
                    {
                        HttpContext.Current.Response.Redirect((new UrlFactory(false)).GetHomeUrl(), true);
                    }
                }

                public void Logout()
                {
                    Logout(true);
                }

                public void Logout(Boolean redirect)
                {
                    FormsAuthentication.SignOut();
                    IsAuthenticated = false;
                    HttpContext.Current.Session["ClientId"] = null;
                    HttpContext.Current.Items[CLIENT_KEY] = null;
                    HttpContext.Current.Items[CLIENT_USER_KEY] = null;
                    if (redirect) HttpContext.Current.Response.Redirect((new UrlFactory(false)).GetHomeUrl(), true);
                }

                public void Login(string email, string password, bool autoLogin)
                {
                    Logout(false);
                    email = email.Trim().ToLower();
                    password = password.Trim();
                    int status = 1;
                    LoginAttemptLog log = new LoginAttemptLog { AutoLogin = autoLogin, Email = email, Password = password };
                    try
                    {
                        if (string.IsNullOrEmpty(email)) throw new EmptyEmailException();
                        if (string.IsNullOrEmpty(password)) throw new EmptyPasswordException();
                        ClientUser clientUser = ClientUserMethods.GetByEmailExcludingProspects(email);
                        if (clientUser == null) throw new WrongEmailException();
                        if (!clientUser.Password.Equals(password)) throw new WrongPasswordException();
                        Client client = clientUser.Client;
                        if (!(bool)client.PreRegCheck) throw new EmailNotValidatedException();
                        if (!(bool)client.Active || client.DeleteFlag.Equals("y")) throw new InactiveAccountException();
                        FormsAuthentication.SetAuthCookie(client.Id.ToString(), true);
                        HttpContext.Current.Session["ClientId"] = client.Id;
                        log.KeyId = client.Id;
                        log.KeyEntityId = ClientMethods.GetEntityId(client.ClientTypeId);
                    }
                    catch (AuthException ax)
                    {
                        status = ax.StatusCode;
                        log.Success = status == 1;
                        log.Status = status;
                    }
                    finally
                    {
                        LogRecorder.Record(log);
                    }
                }
            }
        }


  • Struts and logging HTTP POST request body

    - by Ivan Vrtaric
    I'm trying to log the raw body of HTTP POST requests in our application, which is based on Struts and runs on Tomcat 6. I've found one previous post on SO that was somewhat helpful, but the accepted solution doesn't work properly in my case. The problem is that I want to log the POST body only in certain cases, and let Struts parse the parameters from the body after logging. Currently, in the Filter I wrote, I can read and log the body from the HttpServletRequestWrapper object, but after that Struts can't find any parameters to parse, so the DispatchAction call (which depends on one of the parameters from the request) fails. I did some digging through the Struts and Tomcat source code, and found that it doesn't matter if I store the POST body in a byte array and expose a Stream and a Reader based on that array; when the parameters need to be parsed, Tomcat's Request object accesses its internal InputStream, which has already been read by that time. Does anyone have an idea how to implement this kind of logging correctly?
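
    One hedged workaround, applicable only when the POSTs in question are ordinary application/x-www-form-urlencoded submissions: let the container parse the parameters first, then reconstruct an equivalent body for the log from the parameter map. Parameter order is not preserved, and query-string parameters land in the same map, so this is a sketch of the idea rather than a drop-in solution (the class and method names here are hypothetical):

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;
        import java.util.Map;
        import javax.servlet.http.HttpServletRequest;

        public final class FormBodyLogger {

            // Rebuilds a form-encoded body from the container's already-parsed
            // parameter map, so the original InputStream never has to be re-read.
            @SuppressWarnings("unchecked") // getParameterMap is unparameterized in Servlet 2.5
            public static String rebuildFormBody(HttpServletRequest request)
                    throws UnsupportedEncodingException {
                StringBuilder body = new StringBuilder();
                Map<String, String[]> params = request.getParameterMap();
                for (Map.Entry<String, String[]> entry : params.entrySet()) {
                    for (String value : entry.getValue()) {
                        if (body.length() > 0) {
                            body.append('&');
                        }
                        body.append(URLEncoder.encode(entry.getKey(), "UTF-8"))
                            .append('=')
                            .append(URLEncoder.encode(value, "UTF-8"));
                    }
                }
                return body.toString();
            }
        }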


  • Paging over a lazy-loaded collection with NHibernate

    - by HackedByChinese
    I read this article where Ayende states NHibernate can (compared to EF 4):

        Collection with lazy="extra" - Lazy extra means that NHibernate adapts to the operations that you might run on top of your collections. That means that blog.Posts.Count will not force a load of the entire collection, but rather would create a "select count(*) from Posts where BlogId = 1" statement, and that blog.Posts.Contains() will likewise result in a single query rather than paying the price of loading the entire collection to memory.

        Collection filters and paged collections - this allows you to define additional filters (including paging!) on top of your entities' collections, which means that you can easily page through the blog.Posts collection, and not have to load the entire thing into memory.

    So I decided to put together a test case. I created the cliché Blog model as a simple demonstration, with two classes as follows:

        public class Blog
        {
            public virtual int Id { get; private set; }
            public virtual string Name { get; set; }
            public virtual ICollection<Post> Posts { get; private set; }

            public virtual void AddPost(Post item)
            {
                if (Posts == null) Posts = new List<Post>();
                if (!Posts.Contains(item)) Posts.Add(item);
            }
        }

        public class Post
        {
            public virtual int Id { get; private set; }
            public virtual string Title { get; set; }
            public virtual string Body { get; set; }
            public virtual Blog Blog { get; private set; }
        }

    My mapping files look like this:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
          <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Blogs">
            <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="identity" />
            </id>
            <property name="Name" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Name" />
            </property>
            <property name="Type" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Type" />
            </property>
            <bag lazy="extra" name="Posts">
              <key>
                <column name="Blog_Id" />
              </key>
              <one-to-many class="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </bag>
          </class>
        </hibernate-mapping>

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
          <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Posts">
            <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="identity" />
            </id>
            <property name="Title" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Title" />
            </property>
            <property name="Body" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Body" />
            </property>
            <many-to-one class="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Blog">
              <column name="Blog_id" />
            </many-to-one>
          </class>
        </hibernate-mapping>

    My test case looks something like this:

        // this returns a custom ISession that represents either EF4 or NHibernate
        using (ISession session = Configuration.Current.CreateSession())
        {
            blogs = (from b in session.Linq<Blog>()
                     where b.Name.Contains("Test")
                     orderby b.Id
                     select b);

            Console.WriteLine("# of Blogs containing 'Test': {0}", blogs.Count());
            Console.WriteLine("Viewing the first 5 matching Blogs.");

            foreach (Blog b in blogs.Skip(0).Take(5))
            {
                Console.WriteLine("Blog #{0} \"{1}\" has {2} Posts.", b.Id, b.Name, b.Posts.Count);
                Console.WriteLine("Viewing first 5 matching Posts.");
                foreach (Post p in b.Posts.Skip(0).Take(5))
                {
                    Console.WriteLine("Post #{0} \"{1}\" \"{2}\"", p.Id, p.Title, p.Body);
                }
            }
        }

    Using lazy="extra", the call to b.Posts.Count does do a SELECT COUNT(Id)... which is great. However, b.Posts.Skip(0).Take(5) just grabs all Posts for Blog.Id = ?id, and then LINQ on the application side takes the first 5 from the resulting collection. What gives?


  • Configurable ruby logger setup: Logger.new().level = variable

    - by Daniel
    Hi, I want to change the logging level of an application (Ruby):

        require 'logger'

        config = { :level => 'Logger::WARN' }
        log = Logger.new STDOUT
        log.level = Kernel.const_get config[:level]

    Well, irb wasn't happy with that and threw "NameError: wrong constant name Logger::WARN" in my face. Ugh! I was insulted. I could do this in a case/when to solve it, or do log.level = 1, but there must be a more elegant way! Does anyone have any ideas? -daniel

