Search Results

Search found 21343 results on 854 pages for 'pass by reference'.


  • Shell script to emulate warnings-as-errors?

    - by talkaboutquality
    Some compilers let you set warnings as errors, so that you'll never leave any compiler warnings behind, because if you do, the code won't build. This is a Good Thing. Unfortunately, some compilers don't have a flag for warnings-as-errors. I need to write a shell script or wrapper that provides the feature. Presumably it parses the compilation console output and returns failure if there were any compiler warnings (or errors), and success otherwise. "Failure" also means (I think) that object code should not be produced.

    What's the shortest, simplest UNIX/Linux shell script you can write that meets the explicit requirements above, as well as the following implicit requirements of otherwise behaving just like the compiler:

      - accepts all flags, options, and arguments
      - supports redirection of stdout and stderr
      - produces object code and links as directed

    Key words: elegant, meets all requirements. Extra credit: easy to incorporate into a GNU makefile. Thanks for your help.

    === Clues ===
    This solution to a different problem, using shell functions (?), "Append text to stderr redirects in bash", might figure in. I wonder how to invite litb's friend "who knows bash quite well" to address my question.

    === Answer status ===
    Thanks to Charlie Martin for the short answer, but that, unfortunately, is what I started out with. A while back I used it, released it for office use, and, within a few hours, had its most severe drawback pointed out to me: it will PASS a compilation that produces no warnings but only errors. That's really bad, because then we're delivering object code that the compiler is sure won't work. The simple solution also doesn't meet the other requirements listed. Thanks to Adam Rosenfield for the shorthand, and to Chris Dodd for introducing pipefail to the solution. Chris' answer looks closest, because I think the pipefail should ensure that if compilation actually fails on an error, we'll get failure as we should. Chris, does pipefail work in all shells? And do you have any ideas on the rest of the implicit requirements listed above?
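    A minimal sketch of the kind of wrapper being asked for, assuming GCC-style diagnostics that contain "warning:"; the compiler name, the warning pattern, and the simple "-o" handling are illustrative guesses rather than anything from the thread:

        #!/bin/bash
        # warn-wrap: run the compiler, fail (and drop the object file) if it warned.
        # Note: merging stderr into stdout below is a simplification of the
        # "supports redirection" requirement.
        set -o pipefail                      # keep the compiler's own exit status through the pipe

        log=$(mktemp) || exit 1
        trap 'rm -f "$log"' EXIT

        cc "$@" 2>&1 | tee "$log"            # pass every flag/option/argument straight through
        status=$?

        if grep -q 'warning:' "$log"; then
            # emulate warnings-as-errors: remove the object file named after -o, if any
            out=$(printf '%s\n' "$@" | grep -A1 -x -- '-o' | tail -n 1)
            [ -n "$out" ] && [ "$out" != "-o" ] && rm -f "$out"
            echo "error: warnings treated as errors" >&2
            exit 1
        fi
        exit $status

    From a makefile one could then set CC (or a wrapper variable) to this script, e.g. warn-wrap -c foo.c -o foo.o.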


  • CentOS - Convert Each WAV File to MP3/OGG

    - by Benny
    I am trying to build a script (I'm pretty new to Linux scripting) and I can't seem to figure out why I'm not able to run it. If I keep the header (#!/bin/sh) in, I get the following:

        -bash: /tmp/ConvertAndUpdate.sh: /bin/sh^M: bad interpreter: No such file or directory

    If I take it out, I get the following:

        /tmp/ConvertAndUpdate.sh: line 2: syntax error near unexpected token `do'
        /tmp/ConvertAndUpdate.sh: line 2: `do'

    Any ideas? Here is the full script:

        #!/bin/sh
        for file in *.wav; do
          mp3=$(basename "$file" .wav).mp3;
          #echo $mp3
          nice lame -b 16 -m m -q 9 --resample 8 "$file" "$mp3";
          touch --reference "$file" "$mp3";
          chown apache.apache "$mp3";
          chmod 600 "$mp3";
          rm -f "$file";
          mv "$file" /converted;
          sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
          echo $sql | mysql --user=me --password=pasword Recordings
          #echo $sql
        done
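    The "^M" in the first error message usually points at Windows (CRLF) line endings rather than anything in the script logic; a hedged sketch of how one might strip them, assuming dos2unix is available or falling back to sed:

        # convert the script to Unix line endings, then re-run it
        dos2unix /tmp/ConvertAndUpdate.sh
        # or, without dos2unix installed:
        sed -i 's/\r$//' /tmp/ConvertAndUpdate.sh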


  • Problem with XStream Marshalling to return xml and json

    - by MiKu
    When I use new XStream().toXML(someObject); it returns the following XML:

        <response>
          <status>SUCCESS</status>
          <isOwnershipVerified class="boolean">false</isOwnershipVerified>
        </response>

    and when I use new XStream(new JsonHierarchicalStreamDriver()).toXML(someObject); it returns the following JSON:

        {"response": {
          "status": "SUCCESS",
          "isOwnershipVerified": { "@class": "boolean""false"}
        }}

    Now, since I want to get rid of the class attribute altogether (read: not to alias it with anything else, but to remove it), I use the following code:

        XStream xStream = new XStream();
        StringWriter writer = new StringWriter();
        xStream.marshal(this, new PrettyPrintWriter(writer) {
            @Override
            public void addAttribute(final String key, final String value) {
                if (!key.equals("class")) {
                    super.addAttribute(key, value);
                }
            }
        });
        return writer.toString();

    which gives the following XML:

        <response>
          <status>SUCCESS</status>
          <isOwnershipVerified>false</isOwnershipVerified>
        </response>

    but when I pass new JsonHierarchicalStreamDriver() while creating the xStream object above, it does NOT return JSON; it returns the same XML shown above. What is going wrong here? Thanks in advance...
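    A hedged guess at why marshal() still produces XML: the output format is decided by the HierarchicalStreamWriter handed to marshal(), not by the driver passed to the XStream constructor, and PrettyPrintWriter always writes XML. A sketch of marshalling through a writer created by the JSON driver itself (XStream 1.x API as I recall it; stripping the class attribute would still need a similar addAttribute override on that writer):

        // needs com.thoughtworks.xstream.io.HierarchicalStreamWriter,
        // com.thoughtworks.xstream.io.json.JsonHierarchicalStreamDriver, java.io.StringWriter
        JsonHierarchicalStreamDriver driver = new JsonHierarchicalStreamDriver();
        StringWriter out = new StringWriter();
        HierarchicalStreamWriter jsonWriter = driver.createWriter(out);  // a JSON writer, not PrettyPrintWriter
        new XStream(driver).marshal(someObject, jsonWriter);
        jsonWriter.flush();
        String json = out.toString();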


  • Why doesn't my cursor change to an Hourglass in my FindDialog in Delphi?

    - by lkessler
    I am simply opening my FindDialog with:

        FindDialog.Execute;

    In my FindDialog.OnFind event, I want to change the cursor to an hourglass for searches through large files, which may take a few seconds. So in the OnFind event I do this:

        Screen.Cursor := crHourglass;
        { code that searches for the text and displays it }
        ...
        Screen.Cursor := crDefault;

    What happens is that while searching for the text, the cursor properly changes to the hourglass (or rotating circle in Vista) and then back to the pointer when the search is completed. However, this only happens on the main form. It does not happen on the FindDialog itself: the default cursor remains on the FindDialog during the search. While the search is happening, if I move the cursor over the FindDialog it changes to the default, and if I move it off and over the main form it becomes the hourglass.

    This does not seem like what is supposed to happen. Am I doing something wrong, or does something special need to be done to get the cursor to be the hourglass on all forms? For reference, I'm using Delphi 2009.


  • log4net creates log file but does not write to it (windows service in C#)

    - by user1825172
    I am trying to use basic logging for a Windows service. I added the reference to log4net, and added the following in AssemblyInfo.cs:

        [assembly: log4net.Config.XmlConfigurator(Watch = true)]

    I added the following to my App.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821" requirePermission="false" />
          </configSections>
          <!-- Log4net Logging Setup -->
          <log4net>
            <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
              <file value="c:\\CGSD\\log\\logfile.txt" />
              <appendToFile value="true" />
              <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
              <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date [%thread] %level %logger - %message%newline" />
              </layout>
              <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="INFO" />
                <levelMax value="FATAL" />
              </filter>
            </appender>
            <root>
              <level value="ALL"/>
              <appender-ref ref="RollingFileAppender"/>
            </root>
          </log4net>
        </configuration>

    I have the following code in my service:

        log4net.Config.XmlConfigurator.Configure();
        log4net.ILog log = log4net.LogManager.GetLogger(typeof(Program));
        log.Debug("test");

    The file c:\CGSD\log\logfile.txt is created but nothing is ever written to it. I've been through the forums all day trying to track this one down, but if I overlooked an already posted solution I apologize. Thanks!
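    A hedged observation based on the config above: the LevelRangeFilter starts at INFO, so a Debug call is discarded before it reaches the file. A quick sanity check is to log at INFO or above (minimal sketch, same setup as in the post, only the level changes):

        log4net.Config.XmlConfigurator.Configure();
        log4net.ILog log = log4net.LogManager.GetLogger(typeof(Program));
        log.Info("service starting");   // INFO falls inside the INFO..FATAL range filter
        log.Error("something failed");  // so does ERROR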


  • Web.Routing for the site root or homepage

    - by Aquinas
    I am doing some work with Web.Routing, using it to have friendly URLs and nice REST-like interfaces to a site that is essentially rendered by a single IHttpHandler. There are no WebForms; the handler generates all the HTML/JSON and writes it as part of ProcessRequest. This works well for things like /Sites/Accounting, for example, but I can't get it to work for the site root, i.e. '/'.

    I have tried registering a route with an empty string, and with 'default.aspx' (which is the empty .aspx file I keep in my root folder to play nice with Cassini and IIS). I set RouteExistingFiles to false explicitly, but whatever I do, when hitting the root URL it still opens default.aspx, which has no code it inherits from and contains a simple h1 tag to show that I've hit it. I don't want to change the default file to redirect to a desired route; I just want the equivalent of a 'default' route that is applied when no other routes are found, similar to MVC.

    For reference, the previous version of the site didn't use Web.Routing, but had a handler referenced in the web.config that was perfectly capable of intercepting requests for the root or default.aspx.

    Specs: ASP.NET 3.5 SP1, C#, no WebForms, MVC or OpenRasta. Plain old IHttpHandlers.


  • Call long running operation in WSS feature OnActivated Event

    - by dirq
    More specifically: how do I reference SPContext in a web service marked with [SoapDocumentMethod(OneWay=true)]?

    We are creating a feature that needs to run a job when a site is created. The job takes about 4 minutes to complete, so we made a web service that we can call when the feature is activated. This works, but we want it to run asynchronously now. We've found the SoapDocumentMethod's OneWay property and that would work awesomely, but the SPContext is now NULL.

    We have our web services in the _vti_bin virtual directory so they're available in each Windows SharePoint Services site. I was using SPContext.Current.Web to get the site and perform the long-running operation. I wanted to just fire and forget about it by returning a SOAP response right away and letting the process run.

    How can I get the current SPContext? I used to be able to do this in my web service:

        SPWeb mySite = SPContext.Current.Web;

    Can I get the same context when I have the [SoapDocumentMethod(OneWay=true)] attribute applied to my web service? Or must I recreate the SPWeb from the URL? This is similar to this thread: http://stackoverflow.com/questions/340192/webservice-oneway-and-new-spsitemyurl

    Update: I've tried these two ways, but they didn't work:

        SPWeb targetSite = SPControl.GetContextWeb(this.Context);
        SPWeb targetSite2 = SPContext.GetContext(this.Context).Web;
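    A hedged sketch of the "recreate the SPWeb from the URL" option mentioned above; the siteUrl value is a hypothetical parameter the caller would have to supply, since a one-way call has no SPContext to read it from:

        using Microsoft.SharePoint;

        // inside the one-way web method (sketch)
        using (SPSite site = new SPSite(siteUrl))      // siteUrl passed in by the caller
        using (SPWeb web = site.OpenWeb())
        {
            // long-running work against 'web' goes here
        }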


  • Silverlight, WCF service, integrated security AND ssl/https not possible?

    - by Flores
    I have this setup that works perfectly when using HTTP:

      - a Silverlight 3 client
      - a .NET 4 WCF service hosted in IIS with basicHttpBinding
      - integrated security on the site

    When setting HTTPS to required on the website, the setup stops working. Using the WcfTestClient on the URI I get the message:

        The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized.

    Maybe this makes sense because the WcfTestClient does not pass credentials? In the web.config the security mode for the service binding is set to 'Transport'. The Silverlight client is created like this:

        BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
        basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport;
        var serviceClient = new ImportServiceClient(basicHttpBinding, serviceAddress);

    The service address of course starts with https://. And the Silverlight client reports this error:

        The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via

    Remember, switching it back to HTTP (and setting the security mode to 'TransportCredentialOnly') makes everything work again. Is the setup I want even supported? If so, how should it be configured?
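    For what it's worth, a hedged sketch of the kind of server-side binding this combination usually calls for (Transport security with Windows credentials for integrated authentication); the binding name is illustrative and this is only a guess at the missing piece, not a verified fix for the 'https is invalid' error:

        <basicHttpBinding>
          <binding name="secureIntegratedBinding">
            <security mode="Transport">
              <transport clientCredentialType="Windows" />
            </security>
          </binding>
        </basicHttpBinding>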


  • Amazon Elastic MapReduce: Exception from FileSystem

    - by S.N
    Hi, I run my application using the Ruby client:

        ruby elastic-mapreduce -j j-20PEKMT9BRSUC --jar s3n://sakae55/lib/edu.cit.som.jar --main-class edu.cit.som.hadoop.SOMDriver --arg s3n://sakae55/repository/input/ecoli/ --arg s3n://sakae55/repository/output/ecoli/pl/ --arg s3n://sakae55/repository/data/ecoli/som.txt

    Then I am seeing the following error:

        java.lang.IllegalArgumentException: This file system object (file:///) does not support access to the request path 'hdfs://ip-10-195-207-230.ec2.internal:9000/mnt/var/lib/hadoop/tmp/mapred/system/job_201004221221_0017/job.jar' You possibly called FileSystem.get(conf) when you should have called FileSystem.get(uri, conf) to obtain a file system supporting your path.
            at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
            at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:52)
            at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:416)
            at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:259)
            at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:676)
            at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:200)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1184)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1160)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1132)
            at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:662)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:729)
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)
            at edu.cit.som.hadoop.SOMDriver.runIteration(SOMDriver.java:106)
            at edu.cit.som.hadoop.SOMDriver.train(SOMDriver.java:69)
            at edu.cit.som.hadoop.SOMDriver.run(SOMDriver.java:52)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
            at edu.cit.som.hadoop.SOMDriver.main(SOMDriver.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
            at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
            at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

    I am not sure why the error refers to "file:///" even though none of the arguments I pass use that scheme.
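    A hedged sketch of what the exception text itself suggests: resolving the FileSystem from the path's own URI rather than from the default (file:///) one. The path is taken from the command line above; the surrounding driver code (getConf() via ToolRunner) is assumed:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        // inside the driver (sketch)
        Configuration conf = getConf();
        Path output = new Path("s3n://sakae55/repository/output/ecoli/pl/");
        FileSystem fs = FileSystem.get(output.toUri(), conf);  // instead of FileSystem.get(conf)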


  • Rails, RSpec and Webrat: Expected output matches rendered output but still getting error in view spec

    - by Anthony Burns
    Hello all, I've just gotten started using BDD with RSpec/Cucumber/Webrat and Rails, and I've run into some frustration trying to get my view spec to pass. First of all, I am running Ruby 1.9.1p129 with Rails 2.3.2, RSpec and RSpec-Rails 1.2.6, Cucumber 0.3.11, and Webrat 0.4.4. Here is the code relevant to my question.

    config/routes.rb:

        map.b_posts 'backend/posts', :controller => 'backend/posts', :action => 'backend_index', :conditions => { :method => :get }
        map.connect 'backend/posts', :controller => 'backend/posts', :action => 'create', :conditions => { :method => :post }

    views/backend/posts/create.html.erb:

        <% form_tag do %>
        <% end %>

    spec/views/backend/posts/create.html.erb_spec.rb:

        describe "backend/posts/create.html.erb" do
          it "should render a form to create a post" do
            render "backend/posts/create.html.erb"
            response.should have_selector("form", :method => 'post', :action => b_posts_path) do |form|
              # Nothing here yet.
            end
          end
        end

    Here is the relevant part of the output when I run script/spec:

        'backend/posts/create.html.erb should render a form to create a post' FAILED
        expected following output to contain a <form method='post' action='/backend/posts'/> tag:
        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
        <html><body><form action="/backend/posts" method="post">
        </form></body></html>

    It would appear to me that what have_selector is looking for is exactly what the template generates, yet the example still fails. I am very much looking forward to seeing my error (because I have a feeling it is my error). Any help is much appreciated!


  • Using the MVVM Light Toolkit to make Blendable applications

    - by Dave
    A while ago, I posted a question regarding switching between a Blend-authored GUI and a Visual Studio-authored one. I got it to work okay by adding my Blend project to my VS2008 project and then changing the startup application and recompiling. This would result in two applications that had completely different GUIs, yet used the exact same ViewModel and Model code. I was pretty happy with that.

    Now that I've learned about Laurent Bugnion's MVVM Light Toolkit, I would really like to leverage his efforts to make this process of supporting multiple GUIs for the same backend code possible. The question is, does the toolkit facilitate this, or am I stuck doing it my previous way? I've watched his video from MIX10 and have read some of the articles about it online. However, I've yet to see something that indicates there is a clean way to let a user dynamically switch GUIs on the fly by loading a different DLL. There are MVVM templates for VS2008 and Blend 3, but am I supposed to create both types of projects for my application and then reference specific files from my VS2008 solution?

    UPDATE: I re-read some information on Laurent's site, and seem to have forgotten that the whole point of the template was to allow the same solution to be opened in VS2008 and Blend. So anyhow, with this new perspective it looks like the answer to my question is that I want to use a combination of my previous solution along with the MVVM Light Toolkit. The former will allow me to make multiple, distinct GUIs around my core code, while the latter will make designing fancy GUIs in Blend easier with the use of a design-time ViewModel. Can anyone comment on this?


  • WCF Double Hop questions about Security and Binding.

    - by Ken Maglio
    Background information: a .NET website calls a service (aka the external service) facade on an app server in the DMZ. This external service then calls the internal service, which is on our internal app server. From there the internal service calls a stored procedure (via LINQ to SQL classes) and passes the serialized data back through the external service, and from there back to the website. We've done this so any communication goes through an external layer (our external app server) and allows interoperability; we access our data just like our clients consuming our services.

    We've gotten to the point in our development where we have completed the system and it all works; the double hop acts as it should. However, now we are working on securing the entire process. We are looking at using TransportWithMessageCredential. We want to have WS2007HttpBinding for the external hop for interoperability, but netTcpBinding for the bridge through the firewall for security and speed. (A hedged config sketch follows below.)

    Questions:

      1. If we choose WS2007HttpBinding as the external service's binding, and netTcpBinding for the internal service, is this possible? I know WS-* supports this, as does netTcp, but do they play nice when passing credential information like user/pass?
      2. If we go to Kerberos, will this impact anything? We may want to do impersonation in the future.

    If you can, when you answer, post any reference links about why you're answering the way you are; that would be very helpful to us. Thanks!
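    A hedged sketch of what the external-hop binding described above might look like in config, assuming TransportWithMessageCredential with username credentials over HTTPS; the binding name and credential type are illustrative assumptions, not part of the original question:

        <ws2007HttpBinding>
          <binding name="externalHopBinding">
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName" />
            </security>
          </binding>
        </ws2007HttpBinding>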


  • realloc() & ARC

    - by RynoB
    How would I be able to rewrite the following utility class, which gets all the class string values for a specific type using the Objective-C runtime functions, so that it works under ARC? The ARC documentation specifically states that realloc should be avoided, and I also get the following compiler error on the line classList = realloc(classList, sizeof(Class) * numClasses):

        "Implicit conversion of a non-Objective-C pointer type 'void *' to '__unsafe_unretained Class *' is disallowed with ARC"

    The below code is a reference to the original article, which can be found here.

        + (NSArray *)classStringsForClassesOfType:(Class)filterType
        {
            int numClasses = 0, newNumClasses = objc_getClassList(NULL, 0);
            Class *classList = NULL;
            while (numClasses < newNumClasses) {
                numClasses = newNumClasses;
                classList = realloc(classList, sizeof(Class) * numClasses);
                newNumClasses = objc_getClassList(classList, numClasses);
            }
            NSMutableArray *classesArray = [NSMutableArray array];
            for (int i = 0; i < numClasses; i++) {
                Class superClass = classList[i];
                do {
                    superClass = class_getSuperclass(superClass);
                    if (superClass == filterType) {
                        [classesArray addObject:NSStringFromClass(classList[i])];
                        break;
                    }
                } while (superClass);
            }
            free(classList);
            return classesArray;
        }

    Your help will be much appreciated. Thanks.
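    A hedged sketch of one way around the ARC diagnostic: size the buffer with a first objc_getClassList(NULL, 0) call and allocate it once as plain C memory of __unsafe_unretained Class pointers, so realloc is not needed at all. This is an assumption about intent, not the original article's own fix:

        #import <objc/runtime.h>

        int numClasses = objc_getClassList(NULL, 0);   // ask how many classes are registered
        __unsafe_unretained Class *classList =
            (__unsafe_unretained Class *)malloc(sizeof(Class) * numClasses);
        numClasses = objc_getClassList(classList, numClasses);
        // ... same superclass walk as in the method above ...
        free(classList);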


  • How to parse kanji numeric characters using ICU?

    - by Aki
    I'm writing a function using ICU to parse a Unicode string which consists of kanji numeric character(s), and I want to return the integer value of the string:

        "五" = 5
        "三十一" = 31
        "五千九百七十二" = 5972

    I'm setting the locale to Locale::getJapan() and using NumberFormat::parse() to parse the character string. However, whenever I pass it any kanji characters, the parse() method returns U_INVALID_FORMAT_ERROR. Does anyone know if ICU supports kanji character strings in the NumberFormat::parse() method? I was hoping that since I'm setting the Locale to Japanese it would be able to parse kanji numeric values. Thanks!

        #include <iostream>
        #include <unicode/numfmt.h>
        using namespace std;

        int main(int argc, char **argv)
        {
            const Locale &jaLocale = Locale::getJapan();
            UErrorCode status = U_ZERO_ERROR;
            NumberFormat *nf = NumberFormat::createInstance(jaLocale, status);
            UChar number[] = {0x4E94}; // character for '5' in Japanese, '五'
            UnicodeString numStr(number);
            Formattable formattable;
            nf->parse(numStr, formattable, status);
            if (U_FAILURE(status)) {
                cout << "error parsing as number: " << u_errorName(status) << endl;
                return(1);
            }
            cout << "long value: " << formattable.getLong() << endl;
        }
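    I'm not certain the locale-default NumberFormat handles spelled-out numerals at all; a hedged sketch of the direction one might try instead is ICU's rule-based (spellout) formatter. Whether its Japanese rule set actually parses these exact strings is an assumption, not something I have verified:

        #include <unicode/rbnf.h>
        // ... same setup as in the program above ...
        UErrorCode status = U_ZERO_ERROR;
        RuleBasedNumberFormat spellout(URBNF_SPELLOUT, Locale::getJapan(), status);
        spellout.setLenient(TRUE);          // lenient parsing for spelled-out input
        UChar five[] = {0x4E94, 0};         // '五'
        Formattable result;
        spellout.parse(UnicodeString(five), result, status);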


  • How to enable and use HTTP PUT and DELETE with Apache2 and PHP?

    - by Andreas Jansson
    Hi, it should be so simple. I've followed every tutorial and forum post I could find, yet I can't get it to work. I simply want to build a RESTful API in PHP on Apache2. In my VirtualHost directive I say:

        <Directory />
            AllowOverride All
            <Limit GET HEAD POST PUT DELETE OPTIONS>
                Order Allow,Deny
                Allow from all
            </Limit>
        </Directory>

    Yet for every PUT request I make to the server, I get 405 Method Not Allowed. Someone advocated using the Script directive, but since I use mod_php, as opposed to CGI, I don't see why that would work. People mention using WebDAV, but to me that seems like overkill. After all, I don't need DAV locking, a DAV filesystem, etc. All I want to do is pass the request on to a PHP script and handle everything myself. I only want to enable PUT and DELETE for the clean semantics.

    Thanks, Andreas
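    In case it helps frame the goal, a hedged sketch of the PHP side of such a handler, assuming Apache does eventually let the request through; the script layout and response codes are illustrative:

        <?php
        // dispatch on the HTTP verb; PUT/DELETE bodies are read from php://input
        $method = $_SERVER['REQUEST_METHOD'];
        if ($method === 'PUT' || $method === 'DELETE') {
            $body = file_get_contents('php://input');   // raw request body (usually empty for DELETE)
            // ... apply $body to the resource identified by $_SERVER['REQUEST_URI'] ...
            header('HTTP/1.1 204 No Content');
        } else {
            header('HTTP/1.1 405 Method Not Allowed');
            header('Allow: GET, POST, PUT, DELETE');
        }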


  • Passing Control's Value to Modal Popup

    - by Sherwin Valdez
    Hello, I would just like to know how to pass a textbox value to a modal popup after clicking a button, using the ModalPopupExtender in ASP.NET. I've tried this code, but it seems that I have no luck :(

        <script runat="server">
            protected void Page_Load(object sender, EventArgs e)
            {
                Button1.Attributes.Add("onclick", "showModalPopup(); return false;");
            }
        </script>

        <asp:ScriptManager ID="ScriptManager1" runat="server">
        </asp:ScriptManager>
        <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
        <asp:Button ID="Button1" runat="server" Text="Button" OnClick='showModalPopup(); return false;' />
        <cc1:ModalPopupExtender ID="ModalPopupExtender1" runat="server"
            TargetControlID="Button1" PopupControlID="Panel1"
            CancelControlID="btnCancel" OkControlID="btnOkay"
            BackgroundCssClass="ModalPopupBG">
        </cc1:ModalPopupExtender>
        <asp:Panel ID="Panel1" Style="display: none" runat="server">
            <div class="HellowWorldPopup">
                <div class="PopupHeader" id="PopupHeader">
                    Header</div>
                <div class="PopupBody">
                    <asp:Label ID="Label1" runat="server"></asp:Label>
                </div>
                <div class="Controls">
                    <input id="btnOkay" type="button" value="Done" />
                    <input id="btnCancel" type="button" value="Cancel" />
                </div>
            </div>
        </asp:Panel>

    JavaScript:

        function showModalPopup() {
            // show the ModalPopupExtender
            var value;
            value = document.getElementById("TextBox1").value;
            $get("<%=Label1.ClientID %>").value = value;
            $find("<%=ModalPopupExtender1.ClientID %>").show();
        }

    I wonder what I missed out :( Thanks, and I hope someone can help me :)
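    A hedged guess at one missing piece: an asp:Label renders as a <span>, which has no .value property, so the text would have to go into innerHTML instead (and the textbox is safer to look up by ClientID). A sketch, with everything else left as in the markup above:

        function showModalPopup() {
            var value = document.getElementById("<%=TextBox1.ClientID %>").value;
            $get("<%=Label1.ClientID %>").innerHTML = value;   // Label renders as a <span>, so set innerHTML
            $find("<%=ModalPopupExtender1.ClientID %>").show();
            return false;
        }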


  • Error displaying a WinForm in Design mode with a custom control on it.

    - by George
    I have a UserControl that is part of a class library. I reference this project from my solution, which adds a control from the referenced project to my toolbox. I add the control to a form. Everything looks good; I compile all and run. Perfect... But when I close the .frm with the control on it and re-open it, I get the error below. The code continues to run. It may have something to do with namespaces: the original namespace was simply "Design", and this was ambiguous and conflicting, so I decided to rename it. I think that's when my problems began.

        To prevent possible data loss before loading the designer, the following errors must be resolved: 2 Errors  [Ignore and Continue]  [Why am I seeing this page?]

        Could not find type 'Besi.Winforms.HtmlEditor.Editor'. Please make sure that the assembly that contains this type is referenced. If this type is a part of your development project, make sure that the project has been successfully built using settings for your current platform or Any CPU.
        Instances of this error (1)
        1. There is no stack trace or error line information available for this error.

        The variable 'Editor1' is either undeclared or was never assigned.
        Instances of this error (1)
        1. BesiAdmin  frmOrder.Designer.vb  Line:775  Column:1
        Show Call Stack
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.Error(IDesignerSerializationManager manager, String exceptionText, String helpLink)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
            at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeStatement(IDesignerSerializationManager manager, CodeStatement statement)


  • .NET: How do I know if I have an unmanaged resource?

    - by Shiftbit
    I've read that unmanaged resources are considered to be things like file handles, streams, and anything low-level. Does MSDN or any other source explain how to recognize an unmanaged resource? I can't seem to find any examples on the net that show unmanaged code; all the examples just have comments that say "unmanaged code here". Can someone perhaps provide a real-world example where I would handle an unmanaged resource in an IDisposable implementation? I've provided the IDisposable pattern below for your convenience. How do I identify an unmanaged resource? Thanks in advance! (IDisposable reference)

        Public Class Sample : Implements IDisposable
            Private disposedValue As Boolean = False ' To detect redundant calls

            ' IDisposable
            Protected Overridable Sub Dispose(ByVal disposing As Boolean)
                If Not Me.disposedValue Then
                    If disposing Then
                        ' TODO: free other state (managed objects).
                    End If
                    ' TODO: free your own state (unmanaged objects).
                    ' TODO: set large fields to null.
                End If
                Me.disposedValue = True
            End Sub

            ' This code added by Visual Basic to correctly implement the disposable pattern.
            Public Sub Dispose() Implements IDisposable.Dispose
                ' Do not change this code. Put cleanup code in Dispose(ByVal disposing As Boolean) above.
                Dispose(True)
                GC.SuppressFinalize(Me)
            End Sub
        End Class
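    As a hedged, illustrative example of what goes in the "unmanaged objects" branch: a raw pointer or OS handle the garbage collector knows nothing about, for instance memory from Marshal.AllocHGlobal. The class below is hypothetical, written only to show where such a resource is released in the same pattern:

        Public Class NativeBuffer : Implements IDisposable
            ' an unmanaged resource: raw memory the GC does not track
            Private buffer As IntPtr = System.Runtime.InteropServices.Marshal.AllocHGlobal(1024)
            Private disposedValue As Boolean = False

            Protected Overridable Sub Dispose(ByVal disposing As Boolean)
                If Not Me.disposedValue Then
                    ' free your own state (unmanaged objects)
                    If buffer <> IntPtr.Zero Then
                        System.Runtime.InteropServices.Marshal.FreeHGlobal(buffer)
                        buffer = IntPtr.Zero
                    End If
                End If
                Me.disposedValue = True
            End Sub

            Public Sub Dispose() Implements IDisposable.Dispose
                Dispose(True)
                GC.SuppressFinalize(Me)
            End Sub
        End Class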


  • Episerver Scheduled Job fails (scheduler service)

    - by Igor
    Our scheduled jobs started failing yesterday with the following error message:

        CustomUpdate.Execute - System.NullReferenceException: Object reference not set to an instance of an object.
            at System.Web.Security.Roles.GetRolesForUser(String username)
            at EPiServer.Security.PrincipalInfo.CreatePrincipal(String username)

    The scheduled job uses anonymous execution and logs in programmatically using the following call:

        if (PrincipalInfo.CurrentPrincipal.Identity.Name == string.Empty)
        {
            PrincipalInfo.CurrentPrincipal = PrincipalInfo.CreatePrincipal(ApplicationSettings.ScheduledJobUsername);
        }

    I have put in some more logging around the PrincipalInfo.CreatePrincipal call (which is in EPiServer.Security) and noticed that PrincipalInfo.CreatePrincipal calls System.Web.Security.Roles.GetRolesForUser(username), and Roles.GetRolesForUser(username) returns an empty string array.

      - There were no changes code-wise or on the server (updates, etc.).
      - I checked that the user name used to run the task is in the database and has roles associated with it.
      - I checked that the applicationName is set up correctly and is associated with the user.
      - If I run the job manually using the same user, it executes with no issues (I know there is a difference between running the job manually and using the scheduler).
      - I also tried creating a new user; that didn't work either.

    Has anyone come across the same or a similar issue? Any thoughts on how to resolve it?


  • Lucene complex structure search

    - by archer
    Basically I have a pretty simple database that I'd like to index with Lucene. The domain classes are:

        // Person domain
        class Person {
            Set<Pair> keys;
        }

        // Pair domain
        class Pair {
            KeyItem keyItem;
            String value;
        }

        // KeyItem domain; name is a unique field within the DB (!!)
        class KeyItem {
            String name;
        }

    I have tens of millions of profiles and hundreds of millions of Pairs; however, since most of the KeyItems' "name" fields are duplicates, there are only a few dozen KeyItem instances. I came up with that structure to save on KeyItem instances. Basically any profile with any fields can be saved into that structure.

    Let's say we have a profile with the properties:

      - name: Andrew Morton
      - education: University of New South Wales
      - country: Australia
      - occupation: Linux programmer

    To store it, we'll have a single Profile instance, four KeyItem instances (name, education, country and occupation), and four Pair instances with the values "Andrew Morton", "University of New South Wales", "Australia" and "Linux Programmer". All other profiles will reference (all or some of) the same KeyItem instances: name, education, country and occupation.

    My question is, how do I index all of that so I can search for a Profile by particular values of KeyItem::name and Pair::value? Ideally I'd like a query like this to work:

        name:Andrew* AND occupation:Linux*

    Should I create a custom Indexer and Searcher? Or could I use the standard ones and just map KeyItem and Pair to Lucene components somehow?
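    A hedged sketch of the "standard components" route: one Lucene Document per Person, with each KeyItem name used as a field name (Lucene 3.x-style API; the writer and person objects are assumed to exist elsewhere):

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;

        // for each Person, flatten its pairs into one document
        Document doc = new Document();
        for (Pair pair : person.keys) {
            doc.add(new Field(pair.keyItem.name, pair.value,
                              Field.Store.YES, Field.Index.ANALYZED));
        }
        writer.addDocument(doc);   // an IndexWriter opened elsewhere

    A query such as name:Andrew* AND occupation:Linux* could then be built with QueryParser against the resulting index.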


  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without a prefix (the whole column is indexed). The State column is a 2-letter state code. What I'm finding is this:

      - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
      - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
      - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
      - When I replace the FirstName, LastName, FullName and State comparisons with LIKE, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
      - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
      - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is: though there are many problems with this design, what is causing this specific problem? My guess is that the index trees are just too large and it takes a very long time traversing them (FirstName, LastName, FullName).

    Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
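    For what it's worth, a hedged sketch of the kind of composite index the FullName-plus-State findings above hint at; the index name and column order are assumptions, and whether the OR'd name predicates let MySQL use it is exactly the open question:

        -- illustrative sketch only
        ALTER TABLE Activity ADD INDEX ix_fullname_state (FullName, State);

    Running EXPLAIN on the fast and slow variants would show which index, if any, each plan actually picks.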


  • custom C++ boost::lambda expression help

    - by aaa
    Hello. A little bit of background: I have some strange multiply nested loops which I converted to a flat work queue (basically collapsing single-index loops into a single multi-index loop). Right now each loop is hand coded. I am trying to generalize the approach to work with any bounds using lambda expressions. For example:

        // RANGE(i,I,N) is basically a macro to generate `int i = I; i < N; ++i`
        // for (RANGE(lb, N)) {
        //     for (RANGE(jb, N)) {
        //         for (RANGE(kb, max(lb, jb), N)) {
        //             for (RANGE(ib, jb, kb+1)) {
        // is equivalent to something like (overload , to produce range)
        flat<1, 3, 2, 4>((_2, _3+1), (max(_4,_3), N), N, N)

    The internals of flat are something like:

        template<size_t I1, size_t I2, ..., class L1_, class L2_, ...>
        boost::array<int,4> flat(L1_ L1, L2_ L2, ...) {
            // boost::array<int,4> current;  // class variable
            bool advance;
            L2_ l2 = L2.bind(current);       // bind current value to lambda
            {
                L1_ l1 = L1.bind(current);   // bind current value to innermost lambda
                l1.next();
                advance = !(l1 < l1.upper()); // some internal logic
                if (advance) {
                    l2.next();
                    current[0] = l1.lower();
                }
            }
            // ...
        }

    My question is, can you give me some ideas on how to write a lambda (derived from Boost) which can be bound to an index array reference to return upper and lower bounds according to the lambda expression? Thank you very much.


  • Binding Data to a TextBlock using Domain Data Source

    - by TFisher
    I have a map with hotspots for each state (done in Expression Blend). I capture each MouseEnter for a state (1 through 50) and pass that into my domain data source:

        Dim activebox As Path = TryCast(sender, Path)
        activebox.Fill = mouseOverColor
        Dim StateID As Integer = CInt(Right(activebox.Name, 2))
        Dim _StateContext As New StateContext
        myDataGrid.ItemsSource = _StateContext.States
        _StateContext.Load(_StateContext.GetStateByStateIDQuery(StateID.Text))

    The above works fine for a DataGrid, a ListBox and even a DataForm. But I created a popup with a StackPanel that has TextBlocks:

        popupStatesBox.DataContext = ??????????????
        popupStatesBox.IsOpen = True  ' popup does open but has no data

        <!-- popupStatesBox.xaml -->
        <Popup x:Name="popupStatsBox" Margin="8,35,0,8" DataContext="{Binding}" IsOpen="false" Width="268" HorizontalAlignment="Left">
            <StackPanel x:Name="Layout" Background="Black">
                <TextBlock x:Name="tbState" Text="{Binding StateName}" />
                <TextBlock x:Name="tbAbbrev" Text="{Binding Abbreviation}" />
            </StackPanel>
        </Popup>

    How do I get the TextBlocks to display the values from the _StateContext? StackPanel has DataContext but no ItemsSource. What am I missing?
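    A hedged guess at the missing piece: Load() is asynchronous, so the popup's DataContext would have to be set once the load completes, for example from the returned LoadOperation's Completed event. A sketch only; the RIA Services member names are from memory and not verified against this project:

        ' after calling Load (needs Imports System.Linq for FirstOrDefault)
        Dim loadOp As LoadOperation(Of State) = _
            _StateContext.Load(_StateContext.GetStateByStateIDQuery(StateID))
        AddHandler loadOp.Completed, AddressOf OnStateLoaded

        Private Sub OnStateLoaded(ByVal sender As Object, ByVal e As EventArgs)
            Dim op As LoadOperation(Of State) = CType(sender, LoadOperation(Of State))
            popupStatsBox.DataContext = op.Entities.FirstOrDefault()
            popupStatsBox.IsOpen = True
        End Sub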


  • Merge replication server side foreign key violation from unpublished table

    - by Reiste
    We are using SQL Server 2005 merge replication with SQL CE 3.5 clients, with filtered partitions for the separate subscriptions and NHibernate for the ORM mapping. There is automatic ID range management from SQL Server for the subscriptions.

    We have a table, Item, and a table with a foreign key to Item: ItemHistory. Both of these are replicated down, filtered according to the subscription. Item has a column called UserId and is filtered per subscription with this filter:

        WHERE UserId IN (SELECT... [complicated subselect]...)

    ItemHistory hangs off Item in the publication filter articles. On the server, we have a table ItemHistoryExport, which has a foreign key to ItemHistory. ItemHistoryExport is not published. Entries in the Item and ItemHistory tables are never deleted, on the server or the client. However, the "ownership" of items (and hence their ItemHistories) MAY change, which causes them to be moved from one client subscription/partition to another from time to time.

    When we sync, we occasionally get the following error:

        A row delete at '48269404 - 4108383dbb11' could not be propagated to 'MyServer\MyInstance.MyDatabase'. This failure can be caused by a constraint violation. The DELETE statement conflicted with the REFERENCE constraint "FK_ItemHistoryExport_ItemHistory". The conflict occurred in database "MyDatabase", table "dbo.ItemHistoryExport", column 'ItemHistoryId'.

    Can anyone help us understand why this happens? There shouldn't ever be a delete happening on the server side.


  • Compiling Visual C++ programs from the command line and msvcr90.dll

    - by Stanley kelly
    Hi, when I compile my Visual C++ 2008 Express program from inside the IDE and redistribute it on another computer, it starts up fine without any DLL dependencies that I haven't accounted for. When I compile the same program from the Visual C++ 2008 command line (under the Start menu) and redistribute it to the other computer, it looks for msvcr90.dll at start-up.

    Here is how it is compiled from the command line:

        cl /Fomain.obj /c main.cpp /nologo -O2 -DNDEBUG /MD /I (list of include directories)
        link /nologo /SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup /OUT:Build\myprogram.exe /LIBPATH:D:\libs (list of libraries)

    And here is how the IDE builds it, based on the relevant parts of the build log:

        /O2 /Oi /GL /I (list of includes) /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /FD /EHsc /MD /Gy /Yu"stdafx.h" /Fp"Release\myprogram" /Fo"Release\\" /Fd"Release\vc90.pdb" /W3 /c /Zi /TP /wd4250 /vd2
        Creating command line "cl.exe @d:\myprogram\Release\RSP00000118003188.rsp /nologo /errorReport:prompt"

        /OUT:"D:\myprogram\Release\myprogram.exe" /INCREMENTAL:NO /LIBPATH:"d:\gtkmm\lib" /MANIFEST /MANIFESTFILE:"Release\myprogram.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"d:\myprogram\Release\myprogram.pdb" /SUBSYSTEM:WINDOWS /OPT:REF /OPT:ICF /LTCG /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 (list of libraries)
        Creating command line "link.exe @d:\myprogram\Release\RSP00000218003188.rsp /NOLOGO /ERRORREPORT:PROMPT"

        /outputresource:"..\Release\myprogram.exe;#1" /manifest .\Release\myprogram.exe.intermediate.manifest
        Creating command line "mt.exe @d:\myprogram\Release\RSP00000318003188.rsp /nologo"

    I would like to be able to compile it from the command line and not have it look for such a late version of the runtime DLL, like the version compiled from the IDE seems not to do. Both versions pass /MD to the compiler, so I am not sure what to do.
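    One hedged guess at the difference: the IDE's third step above embeds a manifest into the executable with mt.exe, which the plain command-line build never does. A sketch of doing the same by hand (object and library lists abbreviated; whether this is actually the cause of the msvcr90.dll lookup is an assumption):

        rem generate a manifest at link time, then embed it into the exe
        link /nologo /SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup /MANIFEST /OUT:Build\myprogram.exe main.obj /LIBPATH:D:\libs
        mt.exe -manifest Build\myprogram.exe.manifest -outputresource:Build\myprogram.exe;#1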

