Search Results

Search found 18319 results on 733 pages for 'reference parameters'.


  • XMLHttpRequest() and outputting a CSV file

    - by sjw
    Initially, I developed a JavaScript function that used window.open to post the contents of a form to a new window, which simply opened and initiated a CSV file download. On reflection, I find the opening of the window superfluous and am trying to just execute an XMLHttpRequest() to download the CSV. I am not getting what I want, and I'm not 100% sure that I can, so I thought I'd ask here for some assistance. When a form is submitted, I want to take all the form values and post them to another page, which builds an SQL string and builds a CSV based on the results. (This I can do, and it works fine with XMLHttpRequest(), as seen below.)

        var xhReq = new XMLHttpRequest();
        var parameters = "";
        for (i = 0; i < formObj.elements.length; i++) {
            parameters += formObj.elements[i].name + "=" + encodeURI(formObj.elements[i].value) + "&";
        }
        xhReq.open("POST", outputLocation, false);
        xhReq.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        xhReq.setRequestHeader("Content-length", parameters.length);
        xhReq.setRequestHeader("Connection", "close");
        xhReq.send(parameters);
        document.write(xhReq.responseText);

    The code above calls the page OK and builds the CSV OK, but it outputs the contents into the current browser window instead of initiating a CSV file download. Can I achieve what I need using XMLHttpRequest(), or am I going about it the wrong way? Thanks
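
    For reference, an XMLHttpRequest response never triggers the browser's save-as behavior by itself; the usual route is to let the server flag the response as an attachment and submit the form normally (not via XHR). A minimal sketch of such a server-side handler, assuming ASP.NET (the ExportCsvHandler and BuildCsv names are illustrative, not from the question):

        // Hypothetical ASP.NET handler: the Content-Disposition header is what
        // makes the browser download the response as a file instead of rendering it.
        public class ExportCsvHandler : System.Web.IHttpHandler
        {
            public void ProcessRequest(System.Web.HttpContext context)
            {
                string csv = BuildCsv(context.Request.Form); // assumed CSV-building helper
                context.Response.ContentType = "text/csv";
                context.Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
                context.Response.Write(csv);
            }

            public bool IsReusable { get { return false; } }

            private static string BuildCsv(System.Collections.Specialized.NameValueCollection form)
            {
                return string.Empty; // placeholder: build the CSV from the posted values here
            }
        }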


  • How do I access Dictionary items?

    - by salvationishere
    I am developing a C# VS2008 / SQL Server website app and am new to the Dictionary class. Can you please advise on the best method of accomplishing this? Here is a code snippet:

        SqlConnection conn2 = new SqlConnection(connString);
        SqlCommand cmd = conn2.CreateCommand();
        cmd.CommandText = "dbo.AppendDataCT";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = conn2;
        SqlParameter p1, p2, p3;
        foreach (string s in dt.Rows[1].ItemArray)
        {
            DataRow dr = dt.Rows[1]; // second row
            p1 = cmd.Parameters.AddWithValue((string)dic[0], (string)dr[0]);
            p1.SqlDbType = SqlDbType.VarChar;
            p2 = cmd.Parameters.AddWithValue((string)dic[1], (string)dr[1]);
            p2.SqlDbType = SqlDbType.VarChar;
            p3 = cmd.Parameters.AddWithValue((string)dic[2], (string)dr[2]);
            p3.SqlDbType = SqlDbType.VarChar;
        }

    but this gives me the compiler error:

        The best overloaded method match for 'System.Collections.Generic.Dictionary<string,string>.this[string]' has some invalid arguments

    I just want to access each value from "dic" and load it into these SQL parameters. How do I do this? Do I have to enter the key? The keys are named "col1", "col2", etc., so not the most user-friendly. Any other tips? Thanks!
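
    For what it's worth, the compiler error says it all: the Dictionary<string,string> indexer takes a string key, not an int position, so dic[0] cannot compile. A sketch of key-based access, reusing the question's cmd, dic, and dr (the "col1"/"col2" keys come from the question's own naming):

        // Index by key; dic["col1"] is the direct replacement for dic[0]:
        p1 = cmd.Parameters.AddWithValue(dic["col1"], (string)dr[0]);
        p1.SqlDbType = SqlDbType.VarChar;

        // Or build the "colN" keys in a loop instead of repeating p1/p2/p3:
        for (int i = 0; i < 3; i++)
        {
            SqlParameter p = cmd.Parameters.AddWithValue(dic["col" + (i + 1)], (string)dr[i]);
            p.SqlDbType = SqlDbType.VarChar;
        }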


  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system where all my handlers derive from an IHandler class and implement an onEvent(const Event &event) method. Event is the base class for all events and contains only the enumerated event type. All actual events derive from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to poll the events from a queue of type Event; how do I do that?

        - Dynamic pointers with reference counting sound like too big a solution.
        - Making copies is more difficult than it sounds, since I'm only receiving a reference to the base type; each time I would need to check the type of event, downcast to EventKey, and then make a copy to store in a queue. This sounds like the only solution, but it is unpleasant since I would need to know every single type of event and would have to check that for every event received. Sounds like a bad plan.
        - I could allocate the events dynamically and then send around pointers to those events, enqueueing them if wanted. But other than having reference counting, how would I be able to keep track of that memory? Do you know any way to implement a very light reference counter that wouldn't interfere with the user?

    What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa


  • Dealing with Expression Blend's lack of support for C++/CLI projects

    - by Brian Ensink
    I have a WPF C# project that references a C++/CLI mixed-mode project. I'm having trouble using the WPF project in Expression Blend 3. I'm new to Blend, so perhaps this is obvious, but it won't display the XAML designer properly until it builds the project. In my case it complains that my custom commands are not "recognized or accessible" and that the solution is to build the project in Blend. But I can't build the project, because it references a C++/CLI mixed-mode project which Blend won't load. The WPF project is pure C#; it just happens to reference a C++/CLI mixed-mode project, and I'm not asking Blend to do anything with the mixed-mode assembly. How can I work around this problem? Edit: I was able to get it to build by removing the reference to the C++/CLI mixed-mode project and replacing it with a reference to the actual assembly. However, this is not ideal, because in my past experience Visual Studio will not always be able to resolve the reference when switching between release and debug configurations.


  • Is there a "good" PRNG that generates values without hidden state?

    - by actual
    I need a good pseudo-random number generator that can be computed like a pure function of its previous output, without any hidden state. By "good" I mean:

        - I must be able to parametrize the generator in such a way that running it for 2^n iterations with any parameters covers all or almost all values between 0 and 2^n - 1, where n is the number of bits in the output value.
        - The combined generator output of n + p bits must cover all or almost all values between 0 and 2^(n + p) - 1 if I run it for 2^n iterations for every possible combination of its parameters, where p is the number of bits in the parameters.

    For example, an LCG can be computed as a pure function and can meet the first condition, but it cannot meet the second. Say we have a 32-bit generator, m = 2^32 and constant, and our p = 64 (two 32-bit parameters a and c), so n + p = 96 and we must take the output three ints at a time to meet the second condition. Unfortunately, that condition cannot be met because of the strictly alternating sequence of odd and even ints in the output. To overcome this, hidden state must be introduced, but that makes the function impure and breaks the first condition (the period becomes much longer). Am I wanting too much?
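
    For reference, the stateless LCG step discussed above really is a pure function of the previous output; a minimal C# sketch (the full-period conditions in the comment are the standard Hull-Dobell requirements for a power-of-two modulus):

        // One LCG step: next = a * prev + c (mod 2^32), a pure function of prev.
        // With m = 2^32 the sequence visits every 32-bit value (condition 1)
        // when c is odd and a % 4 == 1; but consecutive outputs then strictly
        // alternate between odd and even, which is exactly what breaks condition 2.
        static uint LcgNext(uint prev, uint a, uint c)
        {
            return unchecked(a * prev + c);
        }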


  • What's a good way to parameterize "static" content (e.g. CSS) in a Tomcat webapp?

    - by Steven Huwig
    Some of our CSS files contain parameters that can vary based on the deployment location (dev, QA, prod). For example: background: url(#DOJO_PATH#/dijit/themes...) to avoid hardcoding a path to a particular CDN or locally-hosted Dojo installation. These values are textually substituted with the real values by a deployment script, when it copies the contents of the webapp into the Tomcat webapps directory. That way the same deployment archive file (WAR + TAR file containing other configuration) can be deployed to dev, QA, and prod, with the varying parameters provided by environment-specific configuration files. However, I'd like to make the contents of the WAR (including the templatized CSS files) independent of this in-house deployment script. Since we don't really have control over the deployment script, all I can think to do is configure Tomcat with #DOJO_PATH# etc. as environment variables in the application's context.xml, and use Tomcat to insert those parameters into the CSS at runtime. I could make the CSS files into generated JSPs, but it seems a little ugly to me. Moreover, the substitution only needs to be done once per application deployment, so repeatedly dynamically generating the stylesheets using JSP will be rather wasteful. Does anyone have any alternative ideas or tools to use for this? We're committed to Tomcat and to substituting these parameters at deployment or at runtime (that is, not at build time).


  • C++ struct, public data members and inheritance

    - by Marius
    Is it OK to have public data members in a C++ class/struct in certain particular situations? How would that go along with inheritance? I've read opinions on the matter, some stated already here http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than and http://stackoverflow.com/questions/670958/accessors-vs-public-members, or in books/articles (Stroustrup, Meyers), but I'm still a little bit in the dark. I have some configuration blocks that I read from a file (integers, bools, floats) and I need to place them into a structure for later use. I don't want to expose these externally, just use them inside another class (I actually do want to pass these config parameters to another class, but I don't want to expose them through a public API). The fact is that I have many such config parameters (15 or so), and writing getters and setters seems like unnecessary overhead. Also, I have more than one configuration block, and these share some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle this situation? Does making one big struct to cover all parameters provide an acceptable compromise (I would have to leave some of them set to their default values for blocks that do not use them)?


  • Constructor on type: "Namespace.type" not found.

    - by Nick
    Hello, I am using Castle.Windsor as an IoC, and I am trying to resolve a service type in the constructor of an HttpHandler. I keep receiving this error: "Constructor on type: 'Namespace.type' not found." My configuration has the following entries for the service type IDocumentDirectory:

        <component id="restricted.content.directory"
                   service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
                   type="org.healthwise.foundations.services.content.RestrictedLocalizationDocumentDirectory, org.healthwise.foundations.services">
          <parameters>
            <contentDirectory>${content.directory}</contentDirectory>
            <localizations>
              <array>
                <item>en-us</item>
                <item>es-us</item>
              </array>
            </localizations>
          </parameters>
        </component>
        <component id="content.directory"
                   service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
                   type="org.healthwise.foundations.services.web.client.WebServiceDocumentDirectory, org.healthwise.foundations.services.web.client">
          <parameters>
            <webServiceURL>#{contentDirectoryWebsiteUrl}</webServiceURL>
          </parameters>
        </component>

    In my new handler the constructor looks like this:

        public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
        {
            _contentDirectory = contentDirectory;
        }

    I have never received this error using Castle.Windsor. Can someone explain? Thanks!
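
    One thing to check before the configuration itself: ASP.NET creates IHttpHandler instances on its own through a parameterless constructor, so Windsor may never get the chance to supply the IDocumentDirectory argument, and reflection then fails against the only constructor it can find. A common workaround is a parameterless constructor that pulls the dependency from wherever the application stores its container; in this sketch, ContainerAccessor is a hypothetical static holder for the WindsorContainer:

        public class HeartBeatHttpHandler : System.Web.IHttpHandler
        {
            private readonly IDocumentDirectory _contentDirectory;

            // The only constructor ASP.NET knows how to call.
            public HeartBeatHttpHandler()
                : this(ContainerAccessor.Container.Resolve<IDocumentDirectory>()) // ContainerAccessor is illustrative
            {
            }

            public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
            {
                _contentDirectory = contentDirectory;
            }

            public bool IsReusable { get { return false; } }

            public void ProcessRequest(System.Web.HttpContext context)
            {
                // handler logic goes here
            }
        }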


  • Call Oracle package function using Odbc from C#

    - by Paolo Tedesco
    I have a function defined inside an Oracle package:

        CREATE OR REPLACE PACKAGE BODY TESTUSER.TESTPKG as
          FUNCTION testfunc(n IN NUMBER) RETURN NUMBER as
          begin
            return n + 1;
          end testfunc;
        end testpkg;
        /

    How can I call it from C# using ODBC? I tried the following:

        using System;
        using System.Data;
        using System.Data.Odbc;

        class Program
        {
            static void Main(string[] args)
            {
                using (OdbcConnection connection = new OdbcConnection("DSN=testdb;UID=testuser;PWD=testpwd"))
                {
                    connection.Open();
                    OdbcCommand command = new OdbcCommand("TESTUSER.TESTPKG.testfunc", connection);
                    command.CommandType = System.Data.CommandType.StoredProcedure;
                    command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue;
                    command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input;
                    command.Parameters["n"].Value = 42;
                    command.ExecuteNonQuery();
                    Console.WriteLine(command.Parameters["ret"].Value);
                }
            }
        }

    But I get an exception saying "Invalid SQL Statement". What am I doing wrong?
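
    With the ODBC provider, a bare procedure name usually isn't accepted; the ODBC call escape sequence is expected instead, with a marker for the return value. A sketch of the likely fix, keeping the same parameters (only the CommandText changes):

        // ODBC escape syntax: the first '?' binds the function's return value,
        // the second binds the IN parameter n.
        OdbcCommand command = new OdbcCommand("{ ? = CALL TESTUSER.TESTPKG.testfunc(?) }", connection);
        command.CommandType = System.Data.CommandType.StoredProcedure;
        command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue;
        command.Parameters.Add("n", OdbcType.Int).Value = 42;
        command.ExecuteNonQuery();
        Console.WriteLine(command.Parameters["ret"].Value);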


  • System calls in Windows & the Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain.

    The calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux. A reference for all system calls in Linux (which $SYS_Call_NUM and which parameters to use): http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html; official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html

    The calling convention in Windows: ??? A reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? Official: ??? If you say they didn't document it, then how is one supposed to write libc for Windows without knowing the system calls? How is one supposed to do Windows assembly programming? At least in driver programming one needs to know these, right?

    Now, what's up with the so-called Native API? Are "Native API" and "system calls for Windows" two different terms referring to the same thing? To confirm, I compared these two unofficial sources: system calls, http://www.metasploit.com/users/opcode/syscalls.html, and the Native API, http://undocumented.ntinternals.net/aindex.html. My observations: all system calls begin with the letters Nt, whereas the Native API contains a lot of functions that do not begin with Nt; the system calls of Windows are a subset of the Native API, that is, system calls are just part of the Native API. Can anyone confirm this and explain?


  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple-ish method that constructs a URL out of approximately five parts: base address, port, path, 'action', and a set of parameters. Of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:

        address
        address port
        address port path
        address path
        address action
        address path action
        address port action
        address port path action
        address action params
        address path action params
        address port action params
        address port path action params

    and so forth. The basic approach for this is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method and testing the outcome against the expected output. However, I wonder: is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this? (rant) I only now realize that I learned to write unit tests a few years ago but never really (feel like I) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method, and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, y'know. Advice on that matter is always welcome. (/rant) (note) Christmas weekend approaching; probably won't reply to suggestions until next week. (/note)
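
    One common pattern for exactly this shape of problem is a parameterized (data-driven) test: one test body, one data row per permutation. A sketch using NUnit's TestCase attribute; the UrlBuilder class, BuildUrl signature, and expected strings are illustrative, not from the question:

        using NUnit.Framework;

        [TestFixture]
        public class UrlBuilderTests
        {
            // One row per permutation; null means "this optional part was not supplied".
            [TestCase("host", null, null, null, "http://host")]
            [TestCase("host", 8080, null, null, "http://host:8080")]
            [TestCase("host", 8080, "path", null, "http://host:8080/path")]
            [TestCase("host", null, "path", "action", "http://host/path/action")]
            public void BuildsExpectedUrl(string address, int? port, string path, string action, string expected)
            {
                string actual = UrlBuilder.BuildUrl(address, port, path, action); // hypothetical method under test
                Assert.AreEqual(expected, actual);
            }
        }

    Adding a permutation then costs one attribute line rather than one new test method.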


  • How to convert a lambda to an std::function using templates

    - by retep998
    Basically, what I want to be able to do is take a lambda with any number of parameters of any type and convert it to an std::function. I've tried the following, and neither method works:

        std::function([](){}); // Complains that std::function is missing template parameters

        template <typename T>
        void foo(function<T> f) {}
        foo([](){}); // Complains that it cannot find a matching candidate

    The following code does work, but it is not what I want because it requires explicitly stating the template parameters, which does not work for generic code:

        std::function<void()>([](){});

    I've been mucking around with functions and templates all evening and I just can't figure this out, so any help would be much appreciated. As mentioned in a comment, the reason I'm trying to do this is that I'm trying to implement currying in C++ using variadic templates. Unfortunately, this fails horribly when using lambdas. For example, I can pass a standard function using a function pointer:

        template <typename R, typename...A>
        void foo(R (*f)(A...)) {}

        void bar() {}

        int main()
        {
            foo(bar);
        }

    However, I can't figure out how to pass a lambda to such a variadic function. The reason I'm interested in converting a generic lambda into an std::function is that I can do the following, but it ends up requiring that I explicitly state the template parameters to std::function, which is what I am trying to avoid:

        template <typename R, typename...A>
        void foo(std::function<R(A...)>) {}

        int main()
        {
            foo(std::function<void()>([](){}));
        }


  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise, Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do?

        Running:
            run A on C?
            run A on D?
            run B on C?
            run B on D?
        Compiling:
            compile Ax on C?
            compile Ax on D?
            compile Bx on C?
            compile Bx on D?
        Rewriting:
            rewrite Ax to use features from the C# 3 language level and compile it on D, while having it still work on C?
            rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D?
        Referencing from another assembly:
            reference B from A and have a client app on C use it?
            reference B from A and have a client app on D use it?
            reference A from B and have a client app on C use it?
            reference A from B and have a client app on D use it?

    More importantly, what rules govern the truth or falsity of these hypothetical scenarios?


  • Question about architecting an ASP.NET MVC application

    - by Misnomer
    I have read a little bit about architecture and also about patterns in order to follow best practices. This is the architecture we have, and I wanted to know what you think of it and any proposed changes or improvements to it:

        Presentation Layer - contains all the views, controllers, and any helper classes that the views require; it also holds references to the Model Layer and Business Layer.
        Business Project - contains all the business logic and validation, plus the security helper classes it uses. It holds references to the Data Access Layer and Model Layer.
        Data Access Layer - contains the actual queries (CRUD operations) on the entity classes. It holds a reference to the Model Layer.
        Model Layer - contains the Entity Framework model, DTOs, and enums. It does not reference any of the above layers.

    What are your thoughts on the above architecture? The problem is that I am getting confused by reading about, say, the repository pattern, domain-driven design, and other design patterns. The architecture we have, although not that strict, is still relatively alright I think, and does not really muddle things, but I may be wrong. I would appreciate any help or suggestions here. Thanks!


  • What access token should I use?

    - by user548458
    I made a method for posting scores. It worked normally sometimes, but now I can't post any score. If I use a user access token:

        string path = meID + "/scores";
        var parameters = new Dictionary<string, object> () {
            { "score", score.ToString () },
            { "access_token", accessToken }
        };
        facebook.post (path, parameters, ( error, obj ) =>

    I get this error:

        {"error":{"message":"(#15) This method must be called with an app access_token.", "type":"OAuthException", "code":15}}

    If I use an app access token:

        string path = meID + "/scores";
        var parameters = new Dictionary<string, object> () {
            { "score", score.ToString () },
            { "access_token", appAccessToken }
        };
        facebook.post (path, parameters, ( error, obj ) =>

    I get the other error:

        {"error":{"message":"A user access token is required to request this resource.", "type":"OAuthException", "code":102}}

    Help me please, what am I doing wrong? PS: It worked well until recently, but now it doesn't. I can't explain it...


  • Passing Data to Multiple Threads

    - by alaamh
    I am studying this code from a book:

        #include <pthread.h>
        #include <stdio.h>

        /* Parameters to print_function. */
        struct char_print_parms {
            /* The character to print. */
            char character;
            /* The number of times to print it. */
            int count;
        };

        /* Prints a number of characters to stderr, as given by PARAMETERS,
           which is a pointer to a struct char_print_parms. */
        void* char_print(void* parameters) {
            /* Cast the cookie pointer to the right type. */
            struct char_print_parms* p = (struct char_print_parms*) parameters;
            int i;
            for (i = 0; i < p->count; ++i)
                fputc(p->character, stderr);
            return NULL;
        }

        /* The main program. */
        int main() {
            pthread_t thread1_id;
            pthread_t thread2_id;
            struct char_print_parms thread1_args;
            struct char_print_parms thread2_args;

            /* Create a new thread to print 30,000 'x's. */
            thread1_args.character = 'x';
            thread1_args.count = 30000;
            pthread_create(&thread1_id, NULL, &char_print, &thread1_args);

            /* Create a new thread to print 20,000 'o's. */
            thread2_args.character = 'o';
            thread2_args.count = 20000;
            pthread_create(&thread2_id, NULL, &char_print, &thread2_args);

            usleep(20);
            return 0;
        }

    After running this code, I see a different result each time, and sometimes corrupted output. What is wrong, and what is the correct way to do this?


  • Template neglects const (why?)

    - by Gabriel
    Does somebody know why this compiles?

        template <typename TBufferTypeFront, typename TBufferTypeBack = TBufferTypeFront>
        class FrontBackBuffer {
        public:
            FrontBackBuffer(
                const TBufferTypeFront front,
                const TBufferTypeBack back) : //// const reference assigned to reference???
                m_Front(front),
                m_Back(back)
            {
            };

            ~FrontBackBuffer() {};

            TBufferTypeFront m_Front; ///< The front buffer
            TBufferTypeBack m_Back;   ///< The back buffer
        };

        int main() {
            int b;
            int a;
            FrontBackBuffer<int&, int&> buffer(a, b);
            // buffer.m_Back = 33;
            buffer.m_Front = 55;
        }

    I compile with GCC 4.4. Why does it even let me compile this? Shouldn't there be an error that I cannot assign a const reference to a non-const reference?


  • Satisfying indirect references at runtime.

    - by automatic
    I'm using C# and VS2010. I have a DLL that I reference in my project (as a DLL reference, not a project reference). That DLL (a.dll) references another DLL that my project doesn't directly use; let's call it b.dll. None of these are in the GAC. My project compiles fine, but when I run it I get an exception that b.dll can't be found: it's not being copied to the bin directory when my project is compiled. What is the best way to get b.dll into the bin directory so that it can be found at run time? I've thought of four options:

        1. Use a post-compile step to copy b.dll to the bin directory.
        2. Add b.dll to my project (as a file) and specify "copy to output directory if newer".
        3. Add b.dll as a DLL reference to my project.
        4. Use ILMerge to combine b.dll with a.dll.

    I don't like 3 at all because it makes b.dll visible to my project; the other two seem like hacks. Am I missing other solutions? Which is the "right" way? Would a dependency injection framework be able to resolve and load b.dll?
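
    On the last question: the runtime itself can be taught where to find b.dll at load time, which is close to what a container would have to do anyway. A sketch using AppDomain.AssemblyResolve (the C:\libs probing path is illustrative, not from the question):

        using System;
        using System.IO;
        using System.Reflection;

        static class AssemblyProbing
        {
            public static void Install()
            {
                // Fires whenever the runtime fails to locate an assembly, e.g. b.dll.
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    string fileName = new AssemblyName(args.Name).Name + ".dll";
                    string candidate = Path.Combine(@"C:\libs", fileName); // illustrative path
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }
        }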


  • Is MySQL caching occurring, and how do I fix it?

    - by rlb.usa
    I think that MySQL or ASP.NET is caching my queries. I edited my MySQL stored procedure to remove some parameters, but it keeps saying that those parameters are missing. Here is what happens:

        1. The ASP.NET app calls a MySQL stored procedure. Everything works perfectly.
        2. I delete some parameters from the sproc and from the ASP.NET parameter list accordingly. All parameters exactly match, in case and order, between the new ASP.NET code and the MySQL sproc.
        3. Upon execution, it fails, saying: System.ArgumentException: Parameter 'deleted_parameter_foo_bar' not found in the collection. at MySql.Data.MySqlClient.MySqlParameterCollection ...
        4. I delete the sproc from the database, restart my browser, and re-execute the ASP.NET page. It gives the same error, that the parameter is missing, but the sproc itself doesn't exist anymore.

    (I know 100% that I am editing/deleting from the right database.) How do I fix this or make it work again? I want it to use my new sproc instead of the old one.
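
    One thing worth ruling out, sketched below: MySQL Connector/NET caches stored-procedure parameter metadata, and pooled connections can keep serving the stale copy after the sproc changes. The calls and connection-string option shown belong to the MySql.Data provider named in the stack trace; treat the exact knobs as an assumption to verify against your Connector/NET version.

        // Force fresh connections (and, with them, fresh procedure metadata):
        MySql.Data.MySqlClient.MySqlConnection.ClearAllPools();

        // While debugging, pooling can also be disabled in the connection string,
        // e.g. "server=...;database=...;uid=...;pwd=...;Pooling=false;"
        // and re-enabled once the new parameter list is picked up.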


  • "Syntax error in INSERT INTO statement". Why?

    - by Kevin
    My code is below. I have a method where I pass in three parameters and they get written out to an MS Access database table. However, I keep getting a syntax error message. Can anyone tell me why? I got this example from the internet.

        private static void insertRecord(string day, int hour, int loadKW)
        {
            string connString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\LoadForecastDB.accdb";
            OleDbConnection conn = new OleDbConnection(connString);
            string ins = @"INSERT INTO Forecasts (Day, Hour, Load) VALUES (?,?,?)";
            OleDbCommand cmd = new OleDbCommand(ins, conn);
            cmd.Parameters.Add("@day", OleDbType.VarChar).Value = day;
            cmd.Parameters.Add("@hour", OleDbType.Integer).Value = hour;
            cmd.Parameters.Add("@load", OleDbType.Integer).Value = loadKW;
            conn.Open();
            try
            {
                int count = cmd.ExecuteNonQuery();
            }
            catch (OleDbException ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                conn.Close();
            }
        }
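
    A likely culprit here: Day, Hour, and Load collide with reserved words and built-in function names in Access SQL, and that alone produces a syntax error on an otherwise valid INSERT. The usual fix, sketched with everything else unchanged, is to bracket the column names:

        // Square brackets tell the Jet/ACE engine these are column names,
        // not the DAY/HOUR functions or a keyword.
        string ins = @"INSERT INTO Forecasts ([Day], [Hour], [Load]) VALUES (?, ?, ?)";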


  • Programming Practice

    - by deepti
        public DataTable UserUpdateTempSettings(int install_id, int install_map_id, string Setting_value, string LogFile)
        {
            SqlConnection oConnection = new SqlConnection(sConnectionString);
            DataSet oDataset = new DataSet();
            DataTable oDatatable = new DataTable();
            SqlDataAdapter MyDataAdapter = new SqlDataAdapter();
            try
            {
                oConnection.Open();
                cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection);
                cmd.Parameters.Add(new SqlParameter("@INSTALL_ID", install_id));
                cmd.Parameters.Add(new SqlParameter("@INSTALL_MAP_ID", install_map_id));
                cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value));
                if (LogFile != "")
                {
                    cmd.Parameters.Add(new SqlParameter("@LOGFILE", LogFile));
                }
                cmd.CommandType = CommandType.StoredProcedure;
                MyDataAdapter.SelectCommand = cmd;
                cmd.ExecuteNonQuery();
                MyDataAdapter.Fill(oDataset);
                oDatatable = oDataset.Tables[0];
                return oDatatable;
            }
            catch (Exception ex)
            {
                Utils.ShowError(ex.Message);
                return oDatatable;
            }
            finally
            {
                if ((oConnection.State != ConnectionState.Closed) || (oConnection.State != ConnectionState.Broken))
                {
                    oConnection.Close();
                }
                oDataset = null;
                oDatatable = null;
                oConnection.Dispose();
                oConnection = null;
            }
        }

    I have used ExecuteNonQuery, which normally isn't used with a data adapter, but if I don't call it I get an error. Is it bad programming practice to use ExecuteNonQuery together with a data adapter?
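
    For context, SqlDataAdapter.Fill executes the SelectCommand itself and opens the connection if needed, so the explicit ExecuteNonQuery() above makes the stored procedure run twice. A sketch of the trimmed data path, assuming the procedure simply returns a result set:

        // Fill opens/closes the connection as needed and executes cmd itself,
        // so neither oConnection.Open() nor cmd.ExecuteNonQuery() is required here.
        MyDataAdapter.SelectCommand = cmd;
        MyDataAdapter.Fill(oDataset);
        oDatatable = oDataset.Tables[0];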


  • NLog Exception Details Renderer

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/nlog-exception-details-renderer.aspxI recently switch from Microsoft's Enterprise Library Logging block to NLog.  In my opinion, NLog offers a simpler and much cleaner configuration section with better use of placeholders, complemented by custom variables. Despite this, I found one deficiency in my migration; I had lost the ability to simply render all details of an exception into our logs and notification emails. This is easily remedied by implementing a custom layout renderer. Start by extending 'NLog.LayoutRenderers.LayoutRenderer' and overriding the 'Append' method. using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails";   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { // Todo: Append details to StringBuilder } }   Now that we have a base layout renderer, we simply need to add the formatting logic to add exception details as well as inner exception details. This is done using reflection with some simple filtering for the properties that are already being rendered. I have added an additional 'Register' method, allowing the definition to be registered in code, rather than in configuration files. This complements by 'LogWrapper' class which standardizes writing log entries throughout my applications. using System; using System.Collections.Generic; using System.Linq; using System.Reflection; using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public sealed class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails"; private const string _Spacer = "======================================"; private List<string> _FilteredProperties;   private List<string> FilteredProperties { get { if (_FilteredProperties == null) { _FilteredProperties = new List<string> { "StackTrace", "HResult", "InnerException", "Data" }; }   return _FilteredProperties; } }   public bool LogNulls { get; set; }   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { Append(builder, logEvent.Exception, false); }   private void Append(StringBuilder builder, Exception exception, bool isInnerException) { if (exception == null) { return; }   builder.AppendLine();   var type = exception.GetType(); if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception Details:") .AppendLine(_Spacer) .Append("Exception Type: ") .AppendLine(type.ToString());   var bindingFlags = BindingFlags.Instance | BindingFlags.Public; var properties = type.GetProperties(bindingFlags); foreach (var property in properties) { var propertyName = property.Name; var isFiltered = FilteredProperties.Any(filter => String.Equals(propertyName, filter, StringComparison.InvariantCultureIgnoreCase)); if (isFiltered) { continue; }   var propertyValue = property.GetValue(exception, bindingFlags, null, null, null); if (propertyValue == null && !LogNulls) { continue; }   var valueText = propertyValue != null ? 
propertyValue.ToString() : "NULL"; builder.Append(propertyName) .Append(": ") .AppendLine(valueText); }   AppendStackTrace(builder, exception.StackTrace, isInnerException); Append(builder, exception.InnerException, true); }   private void AppendStackTrace(StringBuilder builder, string stackTrace, bool isInnerException) { if (String.IsNullOrEmpty(stackTrace)) { return; }   builder.AppendLine();   if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception StackTrace:") .AppendLine(_Spacer) .AppendLine(stackTrace); }   public static void Register() { Type definitionType; var layoutRenderers = ConfigurationItemFactory.Default.LayoutRenderers; if (layoutRenderers.TryGetDefinition(Name, out definitionType)) { return; }   layoutRenderers.RegisterDefinition(Name, typeof(ExceptionDetailsRenderer)); LogManager.ReconfigExistingLoggers(); } } For brevity I have removed the Trace, Debug, Warn, and Fatal methods. They are modelled after the Info methods. As mentioned above, note how the log wrapper automatically registers our custom layout renderer reducing the amount of application configuration required. using System; using NLog;   public static class LogWrapper { static LogWrapper() { ExceptionDetailsRenderer.Register(); }   #region Log Methods   public static void Info(object toLog) { Log(toLog, LogLevel.Info); }   public static void Info(string messageFormat, params object[] parameters) { Log(messageFormat, parameters, LogLevel.Info); }   public static void Error(object toLog) { Log(toLog, LogLevel.Error); }   public static void Error(string message, Exception exception) { Log(message, exception, LogLevel.Error); }   private static void Log(string messageFormat, object[] parameters, LogLevel logLevel) { string message = parameters.Length == 0 ? messageFormat : string.Format(messageFormat, parameters); Log(message, (Exception)null, logLevel); }   private static void Log(object toLog, LogLevel logLevel, LogType logType = LogType.General) { if (toLog == null) { throw new ArgumentNullException("toLog"); }   if (toLog is Exception) { var exception = toLog as Exception; Log(exception.Message, exception, logLevel, logType); } else { var message = toLog.ToString(); Log(message, null, logLevel, logType); } }   private static void Log(string message, Exception exception, LogLevel logLevel, LogType logType = LogType.General) { if (exception == null && String.IsNullOrEmpty(message)) { return; }   var logger = GetLogger(logType); // Note: Using the default constructor doesn't set the current date/time var logInfo = new LogEventInfo(logLevel, logger.Name, message); logInfo.Exception = exception; logger.Log(logInfo); }   private static Logger GetLogger(LogType logType) { var loggerName = logType.ToString(); return LogManager.GetLogger(loggerName); }   #endregion   #region LogType private enum LogType { General } #endregion } The following configuration is similar to what is provided for each of my applications. The 'application' variable is all that differentiates the various applications in all of my environments, the rest has been standardized. Depending on your needs to tweak this configuration while developing and debugging, this section could easily be pushed back into code similar to the registering of our custom layout renderer.   
<?xml version="1.0"?>   <configuration> <configSections> <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/> </configSections> <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <variable name="application" value="Example"/> <targets> <target type="EventLog" name="EventLog" source="${application}" log="${application}" layout="${message}${onexception: ${newline}${exceptiondetails}}"/> <target type="Mail" name="Email" smtpServer="smtp.example.local" from="[email protected]" to="[email protected]" subject="(${machinename}) ${application}: ${level}" body="Machine: ${machinename}${newline}Timestamp: ${longdate}${newline}Level: ${level}${newline}Message: ${message}${onexception: ${newline}${exceptiondetails}}"/> </targets> <rules> <logger name="*" minlevel="Debug" writeTo="EventLog" /> <logger name="*" minlevel="Error" writeTo="Email" /> </rules> </nlog> </configuration>   Now go forward, create your custom exceptions without concern for including their custom properties in your exception logs and notifications.


  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok so all I need to do is to reference both NUnit versions the newest one and the official for the current project. There is a nice article form Kent Bogart online how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which does prefix all types inside it. Then I could decorate my tests with the TestFixture and Test attribute from both NUnit versions and everything worked fine except that this was ugly. After playing a little bit around to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attribute in their specific version. That is really neat since the test runners are instructed by attributes what to do in a declarative way there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests   public bool CanBuildFrom(Type type) {     if (!(!type.IsAbstract || type.IsSealed))     {         return false;     }     return (((Reflect.HasAttribute(type,           "NUnit.Framework.TestFixtureAttribute", true) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute"       , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute"   , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute"     , true)); } That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my tests classes with NUnit Attributes and the runner executes my intent without the need to bind me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions) but this is also handled nicely by using not the concrete type but simply to check for the catched exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (Attributes). Everything beyond it will force you to reference several versions of the same assembly with all its consequences. Type equality is lost between versions so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work but as NUnit shows it can be easy. Simplicity is therefore not a nice thing to have but also requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above easy will not be maintainable. There are different approached to versioning. Below are my own personal observations how versioning works within the  .NET Framwork and NUnit.   Versioning Models 1. Bug Fixing and New Isolated Features When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. 
Microsoft did this with the .NET Framework 3.0 which did leave the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundations. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well mostly but each change was carefully reviewed to minimize the risk of breaking changes as much as possible) whereas the new green assemblies of .NET 3,3.5 did not have such obligations since they did implement new unrelated features which did not have any impact on the red assemblies. This is versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers code with every release. 2. New Breaking Features There are times when really new things need to be added to an existing product. The .NET Framework 4.0 did change the CLR in many ways which caused subtle different behavior although the API´s remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. changed method signature void Func() –> bool Func()) but behavioral changes need much more thought and cannot be automated. To minimize the impact .NET 2.0,3.0,3.5 applications will not automatically use the .NET 4.0 runtime when installed but they will keep using the “old” one. What is interesting is that a side by side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk  (except via the usual IPC mechanisms) to each other. Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running two times.   3. New Non Breaking Features It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 10 because the execution contract has not changed. It is possible to write software which executes other components in a version independent way but this is only feasible if the interaction model is relatively simple.   Versioning software is hard and it looks like it will remain hard since you suddenly work in a severely constrained environment when you try to innovate and to keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.


  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL based environment, unlike a CLR-world, we pass around the ‘state’ of an object and not the reference of an object. Well firstly, what does ‘state’ mean and does this also mean that we can send a struct where a class is expected (or vice-versa) as long as their ‘state’ is one and the same? Let’s see. So I have an operation contract defined as below: 1: [ServiceContract] 2: public interface ILearnWcfServiceExtend : ILearnWcfService 3: { 4: [OperationContract] 5: Employee SaveEmployee(Employee employee); 6: } 7:  8: [ServiceBehavior] 9: public class LearnWcfService : ILearnWcfServiceExtend 10: { 11: public Employee SaveEmployee(Employee employee) 12: { 13: employee.EmployeeId = 123; 14: return employee; 15: } 16: } Quite simplistic operation there (which translates to ‘absolutely no business value’). Now, the data contract Employee mentioned above is a struct. 1: public struct Employee 2: { 3: public int EmployeeId { get; set; } 4:  5: public string FName { get; set; } 6: } After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I’ve ignored the rest of the details just to avoid unwanted confusion): 1: public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged I call the service with the code below: 1: private static void CallWcfService() 2: { 3: Employee employee = new Employee { FName = "A" }; 4: Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType); 5: Console.WriteLine("IsClass: {0}", employee.GetType().IsClass); 6: Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName); 7: employee = LearnWcfServiceClient.SaveEmployee(employee); 8: Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName); 9: } The output is: I now change my Employee type from a struct to a class in the proxy class and run the application: 1: public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged { The output this time is: The state of an object implies towards its composition, the properties and the values of these properties and not based on whether it is a reference type (class) or a value type (struct). And as shown above, we’re actually passing an object by its state and not by reference. Continuing on the same topic of ‘type-interchangeability’, WCF treats two data contracts as equivalent if they have the same ‘wire-representation’. We can do so using the DataContract and DataMember attributes’ Name property. 1: [DataContract] 2: public struct Person 3: { 4: [DataMember] 5: public int Id { get; set; } 6:  7: [DataMember] 8: public string FirstName { get; set; } 9: } 10:  11: [DataContract(Name="Person")] 12: public class Employee 13: { 14: [DataMember(Name = "Id")] 15: public int EmployeeId { get; set; } 16:  17: [DataMember(Name="FirstName")] 18: public string FName { get; set; } 19: } I’ve created two data contracts with the exact same wire-representation. Just remember that the names and the types of data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted – Person. This is because we’re saying that the Employee type has the same wire-representation as the Person type. 
Also that the signature of the SaveEmployee operation gets changed on the proxy side: 1: [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")] 2: [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")] 3: public interface ILearnWcfServiceExtend 4: { 5: [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")] 6: ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee); 7: } But, on the service side, the SaveEmployee still accepts and returns an Employee data contract. 1: [ServiceBehavior] 2: public class LearnWcfService : ILearnWcfServiceExtend 3: { 4: public Employee SaveEmployee(Employee employee) 5: { 6: employee.EmployeeId = 123; 7: return employee; 8: } 9: } Despite all these changes, our output remains the same as the last one: This is type-interchangeability at work! Here’s one more thing to ponder about. Our Person type is a struct and Employee type is a class. Then how is it that the Person type got emitted as a ‘class’ in the proxy? It’s worth mentioning that WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below): 1: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 2: xmlns:tem="http://tempuri.org/" 3: xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication"> 4: <soapenv:Header/> 5: <soapenv:Body> 6: <tem:SaveEmployee> 7: <!--Optional:--> 8: <tem:employee> 9: <!--Optional:--> 10: <ser:EmployeeId>?</ser:EmployeeId> 11: <!--Optional:--> 12: <ser:FName>?</ser:FName> 13: </tem:employee> 14: </tem:SaveEmployee> 15: </soapenv:Body> 16: </soapenv:Envelope> There are some differences between how ‘Add Service Reference’ and the svcutil.exe generate the proxy class, but turns out both do some kind of reflection and determine the type of the data contract and emit the code accordingly. So since the Employee type is a class, the proxy ‘Person’ type gets generated as a class. In fact, reflecting on svcutil.exe application, you’ll see that there are a couple of places wherein a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. Seems like these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool though. WSDL tool does not do any kind of reflection of the data contract / serialized type, it emits the type as a class by default. You can check this using the two command lines below:   Note to self: Remember ‘state’ and type-interchangeability when traversing through the WSDL planet!


  • LibPCL issues on Ubuntu 13.10

    - by user254885
    I wanted to install the Point Cloud Library, but it does not work. I use an ODROID board (ARM processor).

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libpcl-all : Depends: libpcl-1.7-all but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    By compiling v1.7, I get these errors:

        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__fcntl_nocancel':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:37: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__libc_fcntl':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:53: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:57: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-open64.o): In function `__libc_open64':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:41: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:45: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(cancellation.o):/build/buildd/eglibc-2.17/nptl/cancellation.c:96: more undefined references to `__libc_do_syscall' follow
        collect2: error: ld returned 1 exit status
        make[2]: *** [bin/pcl_convert_pcd_ascii_binary] Error 1
        make[1]: *** [io/tools/CMakeFiles/pcl_convert_pcd_ascii_binary.dir/all] Error 2
        make: *** [all] Error 2

    I could not find anything on Google to solve these errors; I believe some packages were not ported for ARM processors. Any help would be appreciated.

        $ dpkg --list | grep headers
        ii linux-headers-3.0.63-odroidx2 20130215
        ii linux-headers-3.0.71-odroidx2 20130415
        ii linux-headers-3.0.74-odroidx2 20130417
        ii linux-headers-3.0.75-odroidx2 20130426
        ii linux-headers-3.1.10-6 3.1.10-6.10
        ii linux-headers-3.1.10-6-ac100 3.1.10-6.10
        ii linux-headers-ac100 3.1.10.6.2

    Installing packages didn't go well either:

        sudo apt-get install linux-generic
        [sudo] password for odroid:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          debugedit libasound2-dev libestools2.1-dev librpmbuild3 librpmsign1
          thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us
          thunderbird-locale-ko
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic
          linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic
        Suggested packages:
          fdutils linux-doc-3.11.0 linux-source-3.11.0 linux-tools
        The following NEW packages will be installed:
          linux-generic linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic
          linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic
        0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
        Need to get 58.2 MB of archives.
        After this operation, 203 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Get:1 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-3.11.0-17-generic armhf 3.11.0-17.31 [44.5 MB]
        Get:2 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-generic armhf 3.11.0.17.18 [2,356 B]
        Get:3 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17 all 3.11.0-17.31 [12.6 MB]
        Get:4 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17-generic armhf 3.11.0-17.31 [1,128 kB]
        Get:5 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-generic armhf 3.11.0.17.18 [2,350 B]
        Get:6 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-generic armhf 3.11.0.17.18 [1,726 B]
        Fetched 58.2 MB in 13s (4,379 kB/s)
        Selecting previously unselected package linux-image-3.11.0-17-generic.
        (Reading database ... 258618 files and directories currently installed.)
        Unpacking linux-image-3.11.0-17-generic (from .../linux-image-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ...
        Examining /etc/kernel/preinst.d/ Done.
        Selecting previously unselected package linux-image-generic.
        Unpacking linux-image-generic (from .../linux-image-generic_3.11.0.17.18_armhf.deb) ...
        Selecting previously unselected package linux-headers-3.11.0-17.
        Unpacking linux-headers-3.11.0-17 (from .../linux-headers-3.11.0-17_3.11.0-17.31_all.deb) ...
        Selecting previously unselected package linux-headers-3.11.0-17-generic.
        Unpacking linux-headers-3.11.0-17-generic (from .../linux-headers-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ...
        Selecting previously unselected package linux-headers-generic.
        Unpacking linux-headers-generic (from .../linux-headers-generic_3.11.0.17.18_armhf.deb) ...
        Selecting previously unselected package linux-generic.
        Unpacking linux-generic (from .../linux-generic_3.11.0.17.18_armhf.deb) ...
        Setting up linux-image-3.11.0-17-generic (3.11.0-17.31) ...
        Running depmod.
        update-initramfs: deferring update (hook will be called later)
        cp: cannot stat ‘/boot/initrd.img-3.11.0-17-generic’: No such file or directory
        Failed to copy /boot/initrd.img-3.11.0-17-generic to /boot/initrd.img at /var/lib/dpkg/info/linux-image-3.11.0-17-generic.postinst line 730.
        dpkg: error processing linux-image-3.11.0-17-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.11.0-17-generic; however:
          Package linux-image-3.11.0-17-generic is not configured yet.
        dpkg: error processing linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        Setting up linux-headers-3.11.0-17 (3.11.0-17.31) ...
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Setting up linux-headers-3.11.0-17-generic (3.11.0-17.31) ...
        Examining /etc/kernel/header_postinst.d.
        Setting up linux-headers-generic (3.11.0.17.18) ...
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.11.0.17.18); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing linux-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         linux-image-3.11.0-17-generic
         linux-image-generic
         linux-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I had to uninstall these because they were messing up the installation of other packages (build-essentials were already installed).

