Search Results

Search found 10355 results on 415 pages for 'shell extension'.

Page 359/415 | < Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >

  • How to get forkpty to handle redirection and other bash-isms?

    - by Jeremy Friesner
    Hi all, I've got a GUI C++ program that takes a shell command from the user, calls forkpty() and execvp() to execute that command in a child process, while the parent (GUI) process reads the child process's stdout/stderr output and displays it in the GUI. This all works nicely (under Linux and MacOS/X). For example, if the user enters "ls -l /foo", the GUI will display the contents of the /foo folder. However, bash niceties like output redirection aren't handled. For example, if the user enters "echo bar > /foo/bar.txt", the child process will output the text "bar > /foo/bar.txt" instead of writing the text "bar" to the file "/foo/bar.txt". Presumably this is because execvp() is running the executable command "echo" directly, instead of running /bin/bash and handing it the user's command to massage/preprocess. My question is: what is the correct child process invocation to use in order to make the system behave exactly as if the user had typed his string at the bash prompt? I tried wrapping the user's command in a /bin/bash invocation, like this: /bin/bash -c the_string_the_user_entered, but that didn't seem to work. Any hints?
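
    The /bin/bash -c route the asker tried is the usual answer; a common pitfall is splitting the user's string before handing it to -c, when the whole string must travel as a single argv element. A minimal Python sketch of that invocation (standing in for the C++ forkpty()/execvp() pair; the command string is hypothetical user input):

        import pty

        # The whole user string must reach bash as ONE argument to -c; if the
        # caller splits it on spaces first, bash never sees the redirection.
        user_command = "echo bar > /foo/bar.txt"  # hypothetical user input
        pty.spawn(["/bin/bash", "-c", user_command])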

    Read the article

  • IValueConverter from string

    - by Aleksandar Toplek
    I have an Enum that needs to be shown in a ComboBox. I have managed to get the enum values into the ComboBox using ItemsSource and I'm trying to localize them. I thought that could be done using a value converter, but as my enum values are already strings the compiler throws an error that IValueConverter can't take string as input. I'm not aware of any other way to convert them to another string value. Is there some other way to do that (not the localization but the conversion)? I'm using this markup extension to get the enum values [MarkupExtensionReturnType(typeof (IEnumerable))] public class EnumValuesExtension : MarkupExtension { public EnumValuesExtension() {} public EnumValuesExtension(Type enumType) { this.EnumType = enumType; } [ConstructorArgument("enumType")] public Type EnumType { get; set; } public override object ProvideValue(IServiceProvider serviceProvider) { if (this.EnumType == null) throw new ArgumentException("The enum type is not set"); return Enum.GetValues(this.EnumType); } } and in Window.xaml <Converters:UserTypesToStringConverter x:Key="userTypeToStringConverter" /> .... <ComboBox ItemsSource="{Helpers:EnumValuesExtension Data:UserTypes}" Margin="2" Grid.Row="0" Grid.Column="1" SelectedIndex="0" TabIndex="1" IsTabStop="False"> <ComboBox.ItemTemplate> <DataTemplate DataType="{x:Type Data:UserTypes}"> <Label Content="{Binding Converter=userTypeToStringConverter}" /> </DataTemplate> </ComboBox.ItemTemplate> </ComboBox> And here is the converter class; it's just a test class, no localization yet. public class UserTypesToStringConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return (int) ((Data.UserTypes) value) == 0 ? "Fizicka osoba" : "Pravna osoba"; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return default(Data.UserTypes); } }

    Read the article

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, This may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and we have about 20-30 Xml files that implement datasets based off those XSDs. Some Xml files are small (<100Kb), others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out what namespace these Xml files are in, in order to provide (something like) intellisense based off the XSD. The implementation of this is not an issue - another developer has written the code for this. But I'm not sure what the best (and fastest!) way of detecting the namespace is without the use of XmlDocument (which does a full parse). I'm using C# 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect the type if it were extension-based), but unfortunately the Xml namespace is the only way to tell. Right now I've tried XmlDocument, but I've found it to be inefficient and slow, as the larger documents take a long time to parse (even the 100Kb docs). public string GetNamespaceForDocument(Stream document); Something like the above is my method signature - overloads include string for "content". Would a compiled RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast Xml parser in C/C++, parse the content, and have a stub that gives back the namespace, as it's slower in .NET - is this a good idea?
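
    For what it's worth, the namespace is recoverable from just the document element, so a pull parser that stops at the first start tag reads only the first few bytes no matter how large the file is. A Python sketch of the idea (the .NET analogue would be an XmlReader advanced to the first element):

        import xml.etree.ElementTree as ET

        def get_namespace(stream):
            """Read only as far as the document element and return its
            namespace URI (empty string if the root is unqualified)."""
            for _, elem in ET.iterparse(stream, events=("start",)):
                tag = elem.tag  # looks like '{http://example.com/ns}root'
                return tag[1:tag.index("}")] if tag.startswith("{") else ""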

    Read the article

  • Help setting up command line gist

    - by smotchkkiss
    Setup: I'm following defunkt's gist setup guide. [smotchkkiss ~]$ sudo gem install gist [smotchkkiss ~]$ git config --global github.user "my github name" [smotchkkiss ~]$ git config --global github.token "my github token" [smotchkkiss ~]$ echo "puts 'hello, gist.'" > hello.rb [smotchkkiss ~]$ gist hello.rb Output: Usage: open [-e] [-t] [-f] [-W] [-n] [-g] [-h] [-b <bundle identifier>] [-a <application>] [filenames] Help: Open opens files from a shell. By default, opens each file using the default application for that file. If the file is in the form of a URL, the file will be opened as a URL. Options: -a Opens with the specified application. -b Opens with the specified application bundle identifier. -e Opens with TextEdit. -t Opens with default text editor. -f Reads input from standard input and opens with TextEdit. -W, --wait-apps Blocks until the used applications are closed (even if they were already running). -n, --new Open a new instance of the application even if one is already running. -g, --background Does not bring the application to the foreground. -h, --header Searches header file locations for headers matching the given filenames, and opens them. Return value: nil. Help! A nil return value? What gives? No new gist appears on my My Gists page on github.

    Read the article

  • sqlite is required for merb?

    - by mayank
    I have a question regarding merb's dependency on sqlite. I am going to install merb on my machine, and I don't have sqlite installed on it. I tried the command "gem install merb" and saw the following error. If there is any way to install merb with mysql, please tell me. Building native extensions. This could take a while... ERROR: Error installing merb: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for sqlite3.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out

    Read the article

  • Zend file upload error

    - by jgnasser
    I am attempting to upload a file using Zend Framework 1.8 and I get some errors. Here is the code snippet: The form element: $element = new Zend_Form_Element_File('doc'); $element->setLabel('Upload an image:') ->setDestination('/path/to/my/upload/folder'); $element->addValidator('Count', false, 1); $element->addValidator('Size', false, 102400); $element->addValidator('Extension', false, 'jpg,png,gif,doc,docx,xls,xlsx,txt'); $this->addElement($element); The code for handling the upload: $adapter = new Zend_File_Transfer_Adapter_Http(); if (!$adapter->receive()) { $messages = $adapter->getMessages(); echo implode("\n", $messages); } This works fine and the file is uploaded, but I get the error "The file 'doc' was illegal uploaded, possible attack". I managed to get past this problem by not creating a new Zend_File_Transfer_Adapter_Http() but instead using: $adapter = $form->doc->getTransferAdapter(); With this modification the first error disappears, but now I have an error saying I have provided 2 files instead of one (probably it's reading the temp file), and when I adjust the validator to accept two files I then get the error saying "The file 'doc' was not found", and the upload now fails completely. Please help!

    Read the article

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin using Groovy to do basic svn tasks like Checkout, Clean, Tag etc. The Groovy class calls the svn command line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS): svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22' I am able to make the same calls to the command line client through the command prompt or a shell script without any issues, so what is the difference here? Here is my code sample: String command =String.format("svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"", getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir()); Process svnProcess = Runtime.getRuntime().exec(command); BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream())); BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream())); String statusOutputLine ="" while ((statusOutputLine = stdInput.readLine()) != null) { logger.quiet(" " + statusOutputLine); } while (( statusOutputLine = stdError.readLine()) != null) { logger.error(statusOutputLine) throw new Exception(statusOutputLine) } logger.quiet("Successfully Checked out the work space") I do have Neon installed on the system: -bash-4.1$ svn --version svn, version 1.6.11 (r934486) compiled Jun 25 2011, 11:30:15 Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/ This product includes software developed by CollabNet (http://www.Collab.Net/). The following repository access (RA) modules are available: ra_neon : Module for accessing a repository via WebDAV protocol using Neon. handles 'http' scheme handles 'https' scheme ra_svn : Module for accessing a repository using the svn network protocol. with Cyrus SASL authentication handles 'svn' scheme ra_local : Module for accessing a repository on local disk. handles 'file' scheme
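
    A hedged guess at the difference: Runtime.exec(String) tokenizes the command on whitespace only, so the escaped \" characters reach svn as literal quote characters (which svn then percent-encodes as the %22 in the error), whereas a real shell strips them. Passing the arguments as a list sidesteps quoting entirely; a Python sketch of the same checkout (the credential and path values are placeholders):

        import subprocess

        revision, user, password = 1234, "builder", "secret"      # placeholders
        url = "https://source.mycompany.net/svn/MyProject/trunk"  # from the question
        work_dir = "workspace"                                    # placeholder

        # Each list element reaches svn verbatim; no quote characters are
        # needed, so none can end up glued onto the URL.
        cmd = ["svn", "co", "-r", str(revision), "--non-interactive",
               "--trust-server-cert", "--username", user, "--password", password,
               "--depth", "infinity", url, work_dir]
        subprocess.run(cmd, check=True, capture_output=True, text=True)

    The Java/Groovy analogue is the String[] overload of Runtime.exec, or ProcessBuilder.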

    Read the article

  • Call Init from XUL after Page Loads (Firefox add-on)

    - by mattyboy123
    Hi all, I've been working on some code in js/html and it works great. I'm now trying to package it into an add-on for Firefox, and having some issues getting the XUL document correct. PLAIN OLD HTML/JS In my html test file between the <head></head> I have: <script type="text/javascript" src="js/MyCode.js"></script> At the end of the test file before the </body> I have: <script type="text/javascript">MyCode.Static.Init();</script> FIREFOX ADD-ON: OVERLAY.XUL In an overlay.xul file in the extension package I have: <?xml version="1.0"?> <overlay id="mycode" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"> <script type="application/x-javascript" src="chrome://mycode/content/MyCode.js"></script> <script> window.addEventListener("load", function () { gBrowser.addEventListener("load",MyCode.Static.Init,true); }, false); </script> </overlay> This does not seem to enter the method, but then again I'm not even sure if I've got the listeners firing properly. Would this be the correct way to duplicate what I was doing in plain old html/js?

    Read the article

  • How can I terminate a system command with alarm in Perl?

    - by rockyurock
    I am running the code snippet below on Windows. The server starts listening continuously after reading from the client, and I want to terminate this command after a time period. If I use an alarm() call within main.pl, it terminates the whole Perl program (here main.pl), so I moved the system command into a separate Perl file and call that file (alarm.pl) from the original Perl file using system(). But this way I was unable to capture the output of the system() call, either in the original Perl file or in the called one. Could anybody please tell me how to terminate a system() call after a timeout, or how to capture its output in the setup above?

    main.pl:

        my @output = system("alarm.pl");
        print"one iperf completed\n";
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    alarm.pl:

        alarm 30;
        my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    In both cases display.txt is always empty.
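
    For comparison, here is the same pattern, run a command, capture its output, and kill it after 30 seconds, sketched in Python, where the timeout and the capture live in one call (the adb invocation is copied from the question):

        import subprocess

        cmd = ["adb", "shell", "cd /data/app; ./iperf -u -s -p 5001"]
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            output = proc.stdout
        except subprocess.TimeoutExpired as exc:
            # The child is killed at the deadline; whatever it printed
            # before then survives here (or None if nothing was captured).
            output = exc.stdout or ""

        with open("display.txt", "w") as f:
            f.write(output)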

    Read the article

  • MSBuild "Wrapper" fails while VS2010 "Pure" compile succeeds for MFC application in CruiseControl.NET

    - by ee
    The Overview I am working on a Continuous Integration build of an MFC application via CruiseControl.net and VS2010. When building my .sln, a "Visual Studio" CCNet task (devenv) works, but a wrapper MSBuild script run via the CCNet MSBuild task fails with errors like: error RC1015: cannot open include file 'winres.h'.. error C1083: Cannot open include file: 'afxwin.h': No such file or directory error C1083: Cannot open include file: 'afx.h': No such file or directory The Question How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?) Background Details We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors. My Assumptions This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.

    Read the article

  • Hopping from a C++ to a Perl/Unix job

    - by rocknroll
    Hi all, I have been a C++ / Linux developer till now and I am adept in this stack. Of late I have been getting opportunities that require Perl and Unix expertise (with knowledge of C++ and shell scripting). Organizations are showing interest even though I don't have much scripting experience to boast of. The role is more of a support and maintenance project involving SQL as well. I am now in a fix over whether to forgo these offers or not. I don't know the dynamics of an IT organization, and thus on one hand I fear that my C++ experience will be nullified, while on the positive side I am getting to work on a new technology stack which will only add to my skill set. I am sure most of you at some point have encountered such dilemmas and taken some decision. I want you to share your perspectives on a scenario where a person is required to change his/her technology stack when changing jobs. What are the merits and demerits in going with either of the choices? Also, I know that C++ isn't going anywhere in the near future. What about Perl? I have no clue what the future holds for a Perl developer - are there enough opportunities? I am asking this question here because most of my fellow programmers face this career-choice dilemma. Thanks.

    Read the article

  • How to properly combine two files in XAML in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content: <Window x:Class="MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers" Title="Window1" Height="300" Width="300" Loaded="Window_Loaded"> <ad:DockingManager x:Name="dockingManager"> <ad:ResizingPanel Orientation="Vertical"> <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" /> <!-- LINE BELOW DEMONSTRATES WORKING CODE INSTEAD OF LINE ABOVE --> <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <ad:DockableContent x:Name="classesContent" Title="Classes"> <TextBlock>test</TextBlock> </ad:DockableContent> </ad:DocumentPane>--> </ad:ResizingPanel> </ad:DockingManager> </Window> and in another project I have the file Solvers.xaml: <ad:DocumentPane x:Class="Sokoban.View.Solvers" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> </ad:DocumentPane> When I open my Visual Studio solution in Microsoft Blend 4, I see the error: InvalidOperationException: DocumentPane must be put under a DockingManager! when I open either MainWindow.xaml or Solvers.xaml. That is expected in Solvers.xaml, because there really is no DockingManager there, but MainWindow.xaml should work, shouldn't it? How do I solve the problem? Note: It seems to me that the files are processed separately, and because the file Solvers.xaml contains the error, the MainWindow.xaml file also reports the very same error. Note 2: The XAML files use the AvalonDock library. Is there a way to say that Solvers.xaml is only an extension of another file? Thank you for any help!

    Read the article

  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a mercurial repository. I don't want to run these tests serially on the same repository because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all the tests have run I want access to the latest test results from those test work areas. Currently I'm cloning the master repository a dozen times and running a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That works for me. I'm also preparing new changes using the mq extension that I would test on all clones as above before committing them. For testing candidate mq patches, I want to somehow deploy/synchronize them so they are available in the test clones, and apply the ones ready for testing (using some guard) before running the test. Has anybody done this kind of synchronization before? What's the simplest way to do it? Do I need to have versioned mq patches for that?

    Read the article

  • VBScript: Catching the Erroring Variable Value

    - by Soren
    I have a VB Script (.vbs file) that is just a simple directory listing of a drive; it will be the basis of a drive backup script. When running it as it is below, I get a Permission Denied error on some folder, and I need to find out which folder that is so I can figure out what the problem is with it. The line giving the error is "For Each TempFolder In MoreFolders". So what I am trying to figure out is how to WScript.Echo the current path (objDirectory) if there is an error. I am not sure if it matters much, but just in case: the error I am getting is Permission Denied 800A0046 on line 12. So some folder, I do not know which one, is not letting me look inside.

        Set WSShell = WScript.CreateObject("WScript.Shell")
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Dim FolderArr()
        FolderCount = 0
        TopCopyFrom = "G:\"

        Sub WorkWithSubFolders(objDirectory)
            Set MoreFolders = objDirectory.SubFolders
            'The next line is where the error occurs (line 12)
            For Each TempFolder In MoreFolders
                FolderCount = FolderCount + 1
                ReDim Preserve FolderArr(FolderCount)
                FolderArr(FolderCount) = TempFolder.Path
                ' WScript.Echo TempFolder.Path
                WorkWithSubFolders(TempFolder)
            Next
        End Sub

        ReDim Preserve FolderArr(FolderCount)
        FolderArr(FolderCount) = TopCopyFrom
        Set objDirectory = objFSO.GetFolder(TopCopyFrom)
        WorkWithSubFolders(objDirectory)
        Set objDirectory = Nothing
        WScript.Echo "FolderCount = " & FolderCount
        WScript.Sleep 30000
        Set objFSO = Nothing
        Set WSShell = Nothing
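
    As an aside, the same traversal sketched in Python shows one convenient shape for the answer: have the walker hand you the failing path. (os.walk's onerror callback receives the exception, and its filename attribute is the directory that refused access, the analogue of echoing objDirectory.)

        import os

        folders = []

        def note_failure(err):
            # err is the OSError raised while listing a directory;
            # err.filename is the path that could not be opened.
            print("Permission problem at: %s" % err.filename)

        for root, dirs, files in os.walk("G:\\", onerror=note_failure):
            folders.append(root)

        print("FolderCount = %d" % len(folders))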

    Read the article

  • forward/strong enum in VS2010

    - by Noah Roberts
    At http://blogs.msdn.com/vcblog/archive/2010/04/06/c-0x-core-language-features-in-vc10-the-table.aspx there is a table showing the C++0x features that are implemented in 2010 RC. Among them, forward-declared enums and strongly typed enums are listed, but only as "partial". The main text of the article says this means they are either incomplete or implemented in some non-standard way. So I've got VS2010 RC and am playing around with the C++0x features. I can't figure these two out and can't find any documentation on them. Not even the simplest attempts compile. enum class E { test }; int main() {} fails with: 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2332: 'enum' : missing tag name 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2236: unexpected 'class' 'E'. Did you forget a ';'? 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C3381: 'E' : assembly access specifiers are only available in code compiled with a /clr option 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2143: syntax error : missing ';' before '}' 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== int main() { enum E : short; } fails with: 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): warning C4480: nonstandard extension used: specifying underlying type for enum 'main::E' 1>e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): error C2059: syntax error : ';' ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== So it seems it must be some totally non-standard implementation that has allowed them to justify calling this feature "partially" done. How would I rewrite that code to access the forward declaration and strong typing features?

    Read the article

  • how to fix: ctags null expansion of name pattern "\1"

    - by bua
    Hi, as the title says, I have a problem with ctags when trying to parse a user-defined language. Basically I've followed these instructions: The quickest and easiest way to do this is by defining a new language using the program options. In order to have Swine support available every time I start ctags, I will place the following lines into the file $HOME/.ctags, which is read in every time ctags starts: --langdef=swine --langmap=swine:.swn --regex-swine=/^def[ \t]*([a-zA-Z0-9_]+)/\1/d,definition/ The first line defines the new language, the second maps a file extension to it, and the third defines a regular expression to identify a language definition and generate a tag file entry for it. I've tried different flags (b, e) for the regex. My tag definition is: --regex-q=/^[ \t]*[^[:space:]]*[:space:]*:[:space:]*{/\1/f,function/b When I replace \1 with anything else (from the ASCII character set), it works. The output is then (with --regex-q=/^[ \t]*[^[:space:]]*[:space:]*:[:space:]*{/my function name/f,function/b): !_TAG_FILE_FORMAT 2 /extended format; --format=1 will not append ;" to lines/ !_TAG_FILE_SORTED 1 /0=unsorted, 1=sorted, 2=foldcase/ !_TAG_PROGRAM_AUTHOR Darren Hiebert /[email protected]/ !_TAG_PROGRAM_NAME Exuberant Ctags // !_TAG_PROGRAM_URL http://ctags.sourceforge.net /official site/ !_TAG_PROGRAM_VERSION 5.8 // my function name file.q /^.ras.getLocation:{[u]$/;" f my function name file.q /^.a.getResource:{[u; pass]$/;" f my function name file.q /^.a.init:{$/;" f my function name file.q /^.a.kill:{[u; force]$/;" f my function name file.q /^.asdf.status:{[what; u]$/;" f my function name file.q /^.pc:{$/;" f Why doesn't \1 work? (I've tried all of 1-9.)

    Read the article

  • Qt QFileDialog - native dialogs only with static functions?

    - by darron
    I'm trying to simply save a file. However, I need a filename entered without a suffix to automatically get a default suffix (which setDefaultSuffix() does). I'd rather not completely lose the native save dialog just for this. exec() is not overloaded from QDialog, so it totally bypasses the native hook (ignoring the DontUseNativeDialog option even if it's false). If I disable the file overwrite warning and append the default suffix myself after the function returns, then I'd be re-opening the dialog if the user did not want to overwrite... and that's just ugly. Is there some signal I can catch and quickly inject the default suffix if it's not there? I'm guessing not, since it's a native dialog. Is there something I'm doing wrong with the filter? I only have one filter choice. It should use that extension. This seems pretty lame. Launching the save dialog and simply typing "test" should never result in an extensionless file. "test.", yes. "test" no way. That'll really confuse the users when they hit Load and can't see the file they just saved. I guess the cross-platform part of Qt is giving me lowest common denominator file dialog functionality?
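
    For the record, the workaround the asker dismisses as ugly can be made safe by suppressing the native overwrite prompt and re-checking after the suffix is appended; a PyQt5 sketch of that loop (the names and the 'txt' suffix are illustrative, not from the question):

        import os
        from PyQt5.QtWidgets import QFileDialog, QMessageBox

        def get_save_path(parent=None, suffix="txt"):
            """Keep the native dialog, but own the overwrite check so a
            silently appended default suffix can never clobber a file."""
            while True:
                path, _ = QFileDialog.getSaveFileName(
                    parent, "Save", "", "Data files (*.%s)" % suffix,
                    options=QFileDialog.DontConfirmOverwrite)
                if not path:
                    return None  # user cancelled
                if not os.path.splitext(path)[1]:
                    path += "." + suffix  # apply the default suffix ourselves
                if not os.path.exists(path):
                    return path
                answer = QMessageBox.question(parent, "Confirm Save",
                                              "%s already exists. Replace it?" % path)
                if answer == QMessageBox.Yes:
                    return path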

    Read the article

  • Issue intercepting property in Silverlight application

    - by joblot
    I am using Ninject as DI container in a Silverlight application. Now I am extending the application to support interception and started integrating the DynamicProxy2 extension for Ninject. I am trying to intercept calls to properties on a ViewModel and end up getting the following exception: "Attempt to access the method failed: System.Reflection.Emit.DynamicMethod..ctor(System.String, System.Type, System.Type[], System.Reflection.Module, Boolean)" This exception is thrown when invocation.Proceed() is called. I tried two implementations of the interceptor and they both fail: public class NotifyPropertyChangedInterceptor: SimpleInterceptor { protected override void AfterInvoke(IInvocation invocation) { var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy; model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length)); } } public class NotifyPropertyChangedInterceptor: IInterceptor { public void Intercept(IInvocation invocation) { invocation.Proceed(); var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy; model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length)); } } I want to call OnPropertyChanged on the ViewModel when a property value is set. I am using attribute-based interception. [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)] public class NotifyPropertyChangedAttribute : InterceptAttribute { public override IInterceptor CreateInterceptor(IProxyRequest request) { if(request.Method.Name.StartsWith("set_")) return request.Context.Kernel.Get<NotifyPropertyChangedInterceptor>(); return null; } } I tested the implementation with a Console Application and it works alright. I also noted that in the Console Application, as long as I had Ninject.Extensions.Interception.DynamicProxy2.dll in the same folder as Ninject.dll, I did not have to explicitly load DynamicProxy2Module into the kernel, whereas I had to explicitly load it for the Silverlight application as follows: IKernel kernel = new StandardKernel(new DIModules(), new DynamicProxy2Module()); Could someone please help? Thanks

    Read the article

  • Providing File permissions in Installshield

    - by Pawan Kumar
    I have created an installer in InstallShield X. I want to give 'write' permission to a few files when the installation is done under non-admin Windows accounts (by default they will have only 'read' permission). If I select an individual file and go to its properties (inside InstallShield), I have a Permissions tab with options like Domain, Read-only, Full Control, Modify etc. I have tested these options, but they don't affect the msi file (the specific files don't get write permission). Is there something wrong I am doing? There is another way of doing this, by writing a script:

        Set objShell=CreateObject("WScript.Shell")
        installDir = Session.Property("INSTALLDIR.5A884667_3CC4_41EC_B0F2_BEEAB457BB8C")
        supportDir = Session.Property("SUPPORTDIR")
        length = Len(installDir)
        lastChar = Right(installDir, 1)
        if (lastChar = "\") Then
            installDir = Left(installDir, length - 1)
        end if
        'MsgBox supportDir & "\setacl.exe """ & installDir & """ /dir /set S-1-5-32-545 /full /p:yes /sid /silent"
        objshell.Run supportDir & "\setacl.exe """ & installDir & """ /dir /set S-1-5-32-545 /full /p:yes /sid /silent",0,true

    Can someone please explain to me what is going on here, especially that last /set S-1-5-32-545 part? Thanks

    Read the article

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:

        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()

    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:

        touch one
        mkdir two
        touch two/three

    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc. The intended use is for a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) directories to monitor. I'd like to just find the root of the working copy and add the file monitor there. I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible. (In case it matters, I originally posted this on the PyGTK mailing list.)
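
    Lacking a recursive flag in GIO, the manual route the asker hopes to avoid is at least short; a sketch using the same PyGTK-era gio API (one monitor per directory, with only minimal housekeeping on deletion):

        import os
        import gio

        monitors = {}

        def watch(path):
            # GIO monitors are per-directory, so each subdirectory gets its own.
            gfile = gio.File(path)
            m = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
            m.connect("changed", directory_changed)
            monitors[path] = m

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type
            if (evt_type == gio.FILE_MONITOR_EVENT_CREATED and
                    file1.query_file_type(gio.FILE_QUERY_INFO_NONE, None) == gio.FILE_TYPE_DIRECTORY):
                watch(file1.get_path())    # new subdirectory: start watching it
            elif evt_type == gio.FILE_MONITOR_EVENT_DELETED:
                monitors.pop(file1.get_path(), None)  # forget a vanished directory

        for root, dirs, files in os.walk("."):
            watch(root)                    # seed with the existing tree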

    Read the article

  • Please explain JSONP

    - by Cheeso
    I don't understand JSONP. I understand JSON. I don't understand JSONP. Wikipedia is the top search result for JSONP. It says: JSONP or "JSON with padding" is a JSON extension wherein a prefix is specified as an input argument of the call itself. Huh? What call? That doesn't make any sense to me. JSON is a data format. There's no call. The 2nd search result is from some guy named Remy, who writes: JSONP is script tag injection, passing the response from the server in to a user specified function. I can sort of understand that, but it's still not making any sense. What is JSONP, why was it created (what problem does it solve), and why would I use it? Addendum: I've updated Wikipedia with a clearer and more thorough description of JSONP, based on jvenema's answer. Thanks, all.
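
    In one breath: the "call" is a script tag the page injects, and the server wraps ("pads") the JSON in a call to a function name the client supplied, so loading the script executes the client's own callback with the data. The server side of the trick fits in a few lines; a Python illustration (function and callback names are made up):

        import json

        def jsonp(callback, data):
            """Wrap a JSON payload in a call to the client-named function."""
            return "%s(%s);" % (callback, json.dumps(data))

        # A request for ...?callback=handleData would be answered with:
        print(jsonp("handleData", {"answer": 42}))  # handleData({"answer": 42});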

    Read the article

  • Prevent Ninject from calling Initialize multiple times when binding to several interfaces

    - by Ahe
    Hi We have a concrete singleton service which implements Ninject.IInitializable and 2 interfaces. The problem is that the service's Initialize method is called twice, when only one call is desired. We are using .NET 3.5 and Ninject 2.0.0.0. Is there a pattern in Ninject to prevent this from happening? Neither of the interfaces implements Ninject.IInitializable. The service class is: public class ConcreteService : IService1, IService2, Ninject.IInitializable { public void Initialize() { // This is called twice! } } And the module looks like this: public class ServiceModule : NinjectModule { public override void Load() { this.Singleton<Iservice1, Iservice2, ConcreteService>(); } } where Singleton is an extension method defined like this: public static void Singleton<K, T>(this NinjectModule module) where T : K { module.Bind<K>().To<T>().InSingletonScope(); } public static void Singleton<K, L, T>(this NinjectModule module) where T : K, L { Singleton<K, T>(module); module.Bind<L>().ToMethod(n => n.Kernel.Get<T>()); } Of course we could add a bool initialized member to ConcreteService and initialize only when it is false, but that seems quite a hack, and it would require repeating the same logic in every service that implements two or more interfaces. Thanks for all the answers! I learned something from all of them (and am having a hard time deciding which one to mark correct). We ended up creating an IActivable interface and extending the Ninject kernel (it also nicely removed code-level dependencies on Ninject, although attributes still remain).

    Read the article

  • Space in Directory Parameter of svcutil.exe

    - by Drew Frisk
    I'm attempting to download metadata for a WCF service using svcutil, but I'm running into issues with the /directory:<directory> parameter. The directory I want to save to has a space in it: C:\Service References\Logging. So when I execute /t:metadata I receive the following error: Error: The directory 'C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\References\Logging' could not be found. Verify that the directory exists and that you have the appropriate permissions to read it. It looks to me like the space in "Service References" is causing the issue. From my understanding of the command shell (which is very little), spaces act as delimiters for an executable, so I tried escaping the space with a caret (Service^ References) and surrounding the path in double quotes ("C:\Service References\Logging"), but neither of those seems to be working, as the /directory: parameter doesn't recognize them as valid characters in the value. I haven't been able to find any direction in regards to this and svcutil, so I'm at a loss right now. I could download the files to a temp folder and then move them, but I would prefer not to take that approach. I would appreciate any direction that could be given on trying to resolve this. Thanks in advance.

    Read the article

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts). Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs, like slower throughput speeds, that might indicate a flash drive is going to fail much sooner than normal? Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks? In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory? Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!
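
    For reference, the whole-volume test the asker is considering is easy to sketch, though wear-leveling means the FTL (not the program) decides which physical blocks get exercised, so a pass is weak evidence of health. A destructive Python sketch of the naive version:

        import os
        import hashlib

        def surface_test(path, size_bytes, block=1 << 20):
            """Write a deterministic pseudo-random stream of size_bytes to path,
            then read it back and compare. Returns True if every block matched."""
            def blocks():
                digest, written = b"seed", 0
                while written < size_bytes:
                    digest = hashlib.sha256(digest).digest()
                    n = min(block, size_bytes - written)
                    written += n
                    yield (digest * (n // len(digest) + 1))[:n]
            with open(path, "wb") as f:
                for chunk in blocks():
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # force the data out to the device
            with open(path, "rb") as f:
                return all(f.read(len(chunk)) == chunk for chunk in blocks())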

    Read the article

  • When sending headers to download a PDF, Safari appends .html

    - by alex
    Here are the request and response headers for http://www.example.com/get/pdf:

        GET /~get/pdf HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.example.com
        Cookie: etc

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 02:20:43 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: Me
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Pragma: no-cache
        Cache-Control: private
        Content-Disposition: attachment; filename="File #1.pdf"
        Content-Length: 18776
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Basically, the response headers are sent by DOMPDF's stream() method. In Firefox, the file is prompted as File #1.pdf. However, in Safari, the file is saved as File #1.pdf.html. Does anyone know why Safari is appending the html extension to the filename?
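
    A hedged reading of those headers: Content-Disposition names a .pdf, but Content-Type declares text/html, and Safari appends an extension matching the declared type. A minimal WSGI sketch sending consistent headers (the source file name is illustrative):

        def app(environ, start_response):
            pdf_bytes = open("file1.pdf", "rb").read()  # hypothetical source file
            start_response("200 OK", [
                ("Content-Type", "application/pdf"),  # match the payload, not text/html
                ("Content-Disposition", 'attachment; filename="File #1.pdf"'),
                ("Content-Length", str(len(pdf_bytes))),
            ])
            return [pdf_bytes]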

    Read the article
