Search Results

Search found 19928 results on 798 pages for 'static resource'.


  • Separating user resources - Windows Server 2008 (Terminal Server)

    - by Christopher Wilson
    At the moment I am running Windows Terminal Server 2008 for around 10 clients that use the server to run programs and access data. Is there any way to partition each user's resources so that users do not impact one another? For example: User 1 opens a program, and User 2 notices a slowdown. I have looked into Windows System Resource Manager, but I do not know whether it provides what I need, or whether there are third-party tools that also provide this functionality. Any answer is appreciated.

        Server specs: HP ProLiant ML110 G7
        Processor: Intel Xeon E3-1220 (4 cores, 3.1 GHz, 8 MB, 80 W, 1333 MHz)
        RAM: 12 GB DDR3 ECC
        Storage: 1 TB HDD

    Read the article

  • Singleton with inheritance: derived class cannot be instantiated in the parent?

    - by yesraaj
    The code below instantiates a derived singleton based on an environment variable. The compiler stops with error C2512: 'Dotted': no appropriate default constructor available, and I don't understand what it is complaining about.

        #include <stdlib.h>
        #include <iostream>
        #include <string>
        using namespace std;

        class Dotted;

        class Singleton {
        public:
            static Singleton instant() {
                if (!instance_) {
                    char *style = getenv("STYLE");
                    if (!style) {
                        if (strcmp(style, "dotted") == 0) {
                            instance_ = new Dotted();
                            return *instance_;
                        }
                    } else {
                        instance_ = new Singleton();
                        return *instance_;
                    }
                }
                return *instance_;
            }
            void print() { cout << "Singleton"; }
            ~Singleton() {};
        protected:
            Singleton() {};
        private:
            static Singleton *instance_;
            Singleton(const Singleton &);
            void operator=(const Singleton &);
        };

        class Dotted : public Singleton {
        public:
            void print() { cout << "Dotted"; }
        protected:
            Dotted();
        };

        Dotted::Dotted() : Singleton() {}

        int main() {
            Singleton::instant().print();
            cin.get();
        }
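    A minimal sketch of one way to restructure this (an editorial addition under assumptions, not the asker's final code): the likely culprits are that new Dotted() is reached while Dotted is only forward-declared, that the getenv result is tested the wrong way around (strcmp runs exactly when style is null), and that instant() returns the singleton by value through a private copy constructor. Defining Dotted before instant(), checking style for non-null, and returning a reference addresses all three:

        #include <cstdlib>
        #include <cstring>
        #include <iostream>

        class Singleton {
        public:
            static Singleton& instant();                  // reference, not a copy
            virtual void print() { std::cout << "Singleton"; }
            virtual ~Singleton() {}
        protected:
            Singleton() {}
        private:
            static Singleton* instance_;
            Singleton(const Singleton&);                  // non-copyable
            void operator=(const Singleton&);
        };

        class Dotted : public Singleton {
        public:
            Dotted() {}                                   // must be reachable from instant()
            void print() { std::cout << "Dotted"; }
        };

        Singleton* Singleton::instance_ = 0;

        Singleton& Singleton::instant() {
            if (!instance_) {
                const char* style = std::getenv("STYLE");
                if (style && std::strcmp(style, "dotted") == 0)
                    instance_ = new Dotted();             // Dotted is complete here
                else
                    instance_ = new Singleton();
            }
            return *instance_;
        }

        int main() {
            Singleton::instant().print();                 // prints "Dotted" if STYLE=dotted
        }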

    Read the article

  • Mocking with Boost::Test

    - by Billy ONeal
    Hello everyone :) I'm using the Boost::Test library for unit testing, and in general I've been hacking up my own mocking solutions that look something like this:

        // In header for clients
        struct RealFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return FindFirstFile(lpFileName, lpFindFileData);
            };
        };

        template <typename FirstFile_T = RealFindFirstFile>
        class DirectoryIterator {
            //.. Implementation
        };

        // In unit tests (cpp)
        #define THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING 42

        struct FakeFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING;
            };
        };

        BOOST_AUTO_TEST_CASE( MyTest )
        {
            DirectoryIterator<FakeFindFirstFile> LookMaImMocked;
            // Test
        }

    I've grown frustrated with this because it forces me to implement almost everything as a template, and it is a lot of boilerplate code to achieve what I'm looking for. Is there a better method of mocking with Boost::Test than my ad-hoc approach? I've seen several people recommend Google Mock, but it requires a lot of ugly hacks if your functions are not virtual, which I would like to avoid. One last thing: I don't need assertions that a particular piece of code was called. I simply need to be able to inject data that would normally be returned by Windows API functions.
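    One template-free alternative, sketched as an editorial assumption (class and member names are hypothetical): inject the API call as a function pointer whose default is the real Windows function. Production code pays one indirect call; tests swap in a stub that returns canned data, and DirectoryIterator stays an ordinary class:

        #include <windows.h>

        typedef HANDLE (WINAPI *FindFirstFn)(LPCWSTR, LPWIN32_FIND_DATAW);

        class DirectoryIterator {
        public:
            explicit DirectoryIterator(FindFirstFn findFirst = &::FindFirstFileW)
                : findFirst_(findFirst) {}

            HANDLE begin(LPCWSTR pattern, LPWIN32_FIND_DATAW data) {
                return findFirst_(pattern, data);   // indirected, hence fakeable
            }

        private:
            FindFirstFn findFirst_;
        };

        // In a test:
        //   static HANDLE WINAPI FakeFindFirst(LPCWSTR, LPWIN32_FIND_DATAW data) {
        //       return INVALID_HANDLE_VALUE;   // or fill *data with canned values
        //   }
        //   DirectoryIterator it(&FakeFindFirst);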

    Read the article

  • Why assign a null value or another default value first?

    - by Phsika
    I am experimenting with delegates. Everything works (see below), but the compiler raises a warning on the delegate field, telling me I should assign a value to it. Why? The second version below, which initializes the field explicitly, compiles without the warning.

        namespace Delegates
        {
            class Program
            {
                static void Main(string[] args)
                {
                    HesapMak hesapla = new HesapMak();
                    hesapla.Calculator = new HesapMak.Hesap(hesapla.Sum);
                    double sonuc = hesapla.Calculator(34, 2);
                    Console.WriteLine("Toplama Sonucu:{0}", sonuc.ToString());
                    Console.ReadKey();
                }
            }

            class HesapMak
            {
                public double Sum(double s1, double s2) { return s1 + s2; }
                public double Cikarma(double s1, double s2) { return s1 - s2; }
                public double Multiply(double s1, double s2) { return s1 * s2; }
                public double Divide(double s1, double s2) { return s1 / s2; }

                public delegate double Hesap(double s1, double s2);

                public Hesap Calculator;   // <-- the warning points here
            }
        }

    The second version is identical except for the field declaration:

        public Hesap Calculator = null;
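    For reference, a minimal sketch (an editorial addition; assumes .NET 3.5+ where System.Func exists) of the same calculator without a custom delegate type. Initializing the field, or assigning it before the first invocation as here, is exactly what the warning asks for, since invoking a still-null delegate throws a NullReferenceException:

        using System;

        class HesapMak
        {
            public Func<double, double, double> Calculator;   // null until assigned

            public double Sum(double s1, double s2) { return s1 + s2; }
        }

        class Program
        {
            static void Main()
            {
                HesapMak hesapla = new HesapMak();
                hesapla.Calculator = hesapla.Sum;             // assign before invoking
                Console.WriteLine("Toplama Sonucu:{0}", hesapla.Calculator(34, 2)); // 36
                Console.ReadKey();
            }
        }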

    Read the article

  • Problem locking semaphores in C (sys/sem)

    - by Vojtech R.
    Hi, I have a problem with my functions: sometimes two processes enter the critical section at the same time. I have spent ten hours debugging and still can't find the problem. What should I be looking at?

        // lock semaphore
        static int P(int sem_id)
        {
            struct sembuf sem_b;
            sem_b.sem_num = 0;
            sem_b.sem_op = -1;   /* P() */
            sem_b.sem_flg = 0;
            if (semop(sem_id, &sem_b, 1) == -1) {
                print_error("semop in P", errno);
                return 0;
            }
            return 1;
        }

        // unlock semaphore
        static int V(int sem_id)
        {
            struct sembuf sem_b;
            sem_b.sem_num = 0;
            sem_b.sem_op = 1;    /* V() */
            sem_b.sem_flg = 0;
            if (semop(sem_id, &sem_b, 1) == -1) {
                print_error("semop in V", errno);
                return 0;
            }
            return 1;
        }

    The action loop:

        int mutex;
        if ((mutex = semget(key, 1, 0666)) >= 0) {
            // semaphore exists
        }
        while (1) {
            P(mutex);
            assert(get_val(mutex) == 0);
            (*action)++;
            V(mutex);
        }

    Many thanks
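    One classic cause worth checking (an editorial guess, not a diagnosis from the post): if each process creates the semaphore with IPC_CREAT and then sets the initial value itself, two processes can race between creation and initialization, and both may see the semaphore as free. A minimal sketch of an atomic create-or-attach, using IPC_EXCL so only the winner performs the SETVAL:

        #include <errno.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        /* on Linux the caller must define this for semctl(SETVAL) */
        union semun { int val; struct semid_ds *buf; unsigned short *array; };

        static int get_mutex(key_t key)
        {
            int sem_id = semget(key, 1, 0666 | IPC_CREAT | IPC_EXCL);
            if (sem_id >= 0) {
                /* we won the creation race: initialize to 1 (unlocked) */
                union semun arg;
                arg.val = 1;
                if (semctl(sem_id, 0, SETVAL, arg) == -1)
                    return -1;
            } else if (errno == EEXIST) {
                /* someone else created it: attach without re-initializing */
                sem_id = semget(key, 1, 0666);
            }
            return sem_id;
        }

    Strictly speaking, an attacher can still slip in between the creator's semget and SETVAL (the textbook remedy is to poll sem_otime until it becomes nonzero), but this is the shape of the fix.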

    Read the article

  • In Varnish stats, what does "Backend conn. reuses" and "recycles" mean?

    - by electblake
    I have Varnish installed and I think it's working properly (not sure if it matters, but I am using the iptables reroute method: incoming:80 > varnish:8080 > apache:80). In varnishstat I see a fairly high hit-rate average (60-80%), which I am working on, but I am unclear about some of the stats varnishstat presents, specifically the following backend counters:

          380     0.00    0.26  Backend conn. success
        10122    15.00    6.85  Backend conn. reuses
          267     0.00    0.18  Backend conn. was closed
        10391    15.00    7.04  Backend conn. recycles

    I've read a blog post called "Varnishstat for dummies", which covers a lot of varnishstat's details (I recommend it for beginners), but it does not go over these backend stats. Feel free to explain here or link to a resource I've missed :) thanks!

    Read the article

  • C# Error reading two dates from a binary file

    - by Jamie
    Hi all, when reading two dates from a binary file I'm seeing the error below:

        "The output char buffer is too small to contain the decoded characters,
        encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'.
        Parameter name: chars"

    My code is below:

        static DateTime[] ReadDates()
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Open, System.IO.FileAccess.Read);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryReader br = new System.IO.BinaryReader(appData))
            {
                while (br.PeekChar() > 0)
                {
                    result.Add(new DateTime(br.ReadInt64()));
                }
                br.Close();
            }
            return result.ToArray();
        }

        static void WriteDates(IEnumerable<DateTime> dates)
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Create, System.IO.FileAccess.Write);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(appData))
            {
                foreach (DateTime date in dates)
                    bw.Write(date.Ticks);
                bw.Close();
            }
        }

    What could be the cause? Thanks
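    A minimal sketch of the usual fix (an editorial assumption about the cause): BinaryReader.PeekChar tries to decode the upcoming bytes as UTF-8 text, and arbitrary tick values are not always valid UTF-8, hence the decoder error. Comparing stream positions avoids character decoding entirely:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static DateTime[] ReadDates(string appDataFile)
        {
            List<DateTime> result = new List<DateTime>();
            using (BinaryReader br = new BinaryReader(File.OpenRead(appDataFile)))
            {
                // byte-count check instead of PeekChar: no decoding involved
                while (br.BaseStream.Position < br.BaseStream.Length)
                    result.Add(new DateTime(br.ReadInt64()));
            }
            return result.ToArray();
        }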

    Read the article

  • JButton Layout Issue

    - by Tom Johnson
    I'm putting together the basic layout for a contacts book, and I want to know how I can make the three test buttons span from edge to edge just as the arrow buttons do.

        private static class ButtonHandler implements ActionListener {
            public void actionPerformed(ActionEvent e) {
                System.out.println("Code Placeholder");
            }
        }

        public static void main(String[] args) {
            // down button
            ImageIcon downArrow = new ImageIcon("down.png");
            JButton downButton = new JButton(downArrow);
            ButtonHandler downListener = new ButtonHandler();
            downButton.addActionListener(downListener);

            // up button
            ImageIcon upArrow = new ImageIcon("up.png");
            JButton upButton = new JButton(upArrow);
            ButtonHandler upListener = new ButtonHandler();
            upButton.addActionListener(upListener);

            // contacts
            JButton test1Button = new JButton("Code Placeholder");
            JButton test2Button = new JButton("Code Placeholder");
            JButton test3Button = new JButton("Code Placeholder");

            Box box = Box.createVerticalBox();
            box.add(test1Button);
            box.add(test2Button);
            box.add(test3Button);

            JPanel content = new JPanel();
            content.setLayout(new BorderLayout());
            content.add(box, BorderLayout.CENTER);
            content.add(downButton, BorderLayout.SOUTH);
            content.add(upButton, BorderLayout.NORTH);

            JFrame window = new JFrame("Contacts");
            window.setContentPane(content);
            window.setSize(400, 600);
            window.setLocation(100, 100);
            window.setVisible(true);
        }
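    A minimal sketch of one way to get full-width buttons (an editorial addition, not from the post): a one-column GridLayout stretches every cell to the panel's width, whereas a vertical Box honors each button's maximum size.

        import java.awt.BorderLayout;
        import java.awt.GridLayout;
        import javax.swing.*;

        public class ContactsDemo {
            public static void main(String[] args) {
                JPanel contacts = new JPanel(new GridLayout(0, 1)); // one column, any rows
                contacts.add(new JButton("Code Placeholder"));
                contacts.add(new JButton("Code Placeholder"));
                contacts.add(new JButton("Code Placeholder"));

                JPanel content = new JPanel(new BorderLayout());
                content.add(contacts, BorderLayout.CENTER);
                content.add(new JButton("down"), BorderLayout.SOUTH);
                content.add(new JButton("up"), BorderLayout.NORTH);

                JFrame window = new JFrame("Contacts");
                window.setContentPane(content);
                window.setSize(400, 600);
                window.setVisible(true);
            }
        }

    If keeping the Box is preferable, an equivalent trick is to raise each button's maximum width: test1Button.setMaximumSize(new Dimension(Integer.MAX_VALUE, test1Button.getPreferredSize().height)).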

    Read the article

  • Producing Mini Dumps for _caught_ SEH exceptions in mixed code DLL

    - by Assaf Lavie
    I'm trying to use code similar to clrdump to create mini dumps in my managed process. This managed process invokes C++/CLI code which invokes some native C++ static lib code, wherein SEH exceptions may be thrown (e.g. the occasional access violation). C# WinForms -> C++/CLI DLL -> Static C++ Lib -> ACCESS VIOLATION Our policy is to produce mini dumps for all SEH exceptions (caught & uncaught) and then translate them to C++ exceptions to be handled by application code. This works for purely native processes just fine; but when the application is a C# application - not so much. The only way I see to produce dumps from SEH exceptions in a C# process is to not catch them - and then, as unhandled exceptions, use the Application.ThreadException handler to create a mini dump. The alternative is to let the CLR translate the SEH exception into a .Net exception and catch it (e.g. System.AccessViolationException) - but that would mean no dump is created, and information is lost (stack trace information in Exception isn't as rich as the mini dump). So how can I handle SEH exceptions by both creating a minidump and translating the exception into a .Net exception so that my application may try to recover?
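    One native-side approach, sketched under stated assumptions (an editorial addition, not the poster's solution; requires compiling the native code with /EHa and linking DbgHelp.lib): install a per-thread SEH translator that first writes a minidump and then throws a C++ exception, so application code can catch it while the dump still captures the faulting context.

        #include <windows.h>
        #include <dbghelp.h>
        #include <eh.h>
        #include <stdexcept>

        void __cdecl SehTranslator(unsigned int code, EXCEPTION_POINTERS* ep)
        {
            HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                                      CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (file != INVALID_HANDLE_VALUE)
            {
                MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), ep, FALSE };
                MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                                  MiniDumpNormal, &mei, NULL, NULL);
                CloseHandle(file);
            }
            throw std::runtime_error("SEH exception");  // now catchable as C++
        }

        // each native thread that should translate must call:
        //   _set_se_translator(SehTranslator);

    The C++/CLI layer can then catch the C++ exception (or let it surface to managed code) after the dump has already been written.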

    Read the article

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure:

        static int LoadFromFile(FILE *Stream, ConfigStructure *cs)
        {
            int tempInt;
            ...
            if (fscanf(Stream, "Version: %d\n", &tempInt) != 1) {
                printf("Unable to read version number\n");
                return 0;
            }
            cs->Version = tempInt;
            ...
        }

    to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this:

        static int LoadFromString(char *Stream, ConfigStructure *cs)

    A few things to note:

    - The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward-compatible manner, which makes duplicating the overall logic quite a pain.
    - The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures, so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backward-compatible manner.
    - I'm tempted to just pass the file in as a string (as in the prototype above) and convert all the fscanf calls to sscanf, but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually.
    - This has to remain in C, so no C++ functionality like streams can help here.

    Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
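    On POSIX systems there is exactly that facility: fmemopen returns a FILE * backed by a memory buffer (glibc has it, and it is in POSIX.1-2008, though not in the MSVC runtime). A minimal sketch under that assumption, reusing LoadFromFile unchanged:

        #include <stdio.h>
        #include <string.h>

        int LoadFromString(char *config, ConfigStructure *cs)
        {
            /* expose the in-memory text through a stdio stream */
            FILE *stream = fmemopen(config, strlen(config), "r");
            if (stream == NULL)
                return 0;

            int result = LoadFromFile(stream, cs);
            fclose(stream);
            return result;
        }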

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the Clojure implementation was the slowest in each case. For instance:

    Python - hello.py

        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")

    and the result:

        $ time python hello.py
        Hello, world
        real  0m0.027s
        user  0m0.013s
        sys   0m0.014s

    Java - hello.java

        import java.io.*;

        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                hello_world("world");
            }
        }

    and the result:

        $ time java hello
        Hello, world
        real  0m0.324s
        user  0m0.296s
        sys   0m0.065s

    and finally, Clojure - hellofun.clj

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")

    and the results:

        $ time clj hellofun.clj
        Hello, world
        real  0m1.418s
        user  0m1.649s
        sys   0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly: isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system; the gain I get from using a Lisp seems completely offset by the performance issues I can see here.)

    The machine used here is a 2007 MacBook Pro running Snow Leopard, with a 2.16 GHz Intel Core 2 Duo and 2 GB DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash

        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB

        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
            CP=$CP:`cat .clojure`
        fi

        if [ -z "$1" ]; then
            $JAVA -server -cp $CP jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi

    Read the article

  • Prevent Ninject from calling Initialize multiple times when binding to several interfaces

    - by Ahe
    Hi, we have a concrete singleton service which implements Ninject.IInitializable and two interfaces. The problem is that the service's Initialize method is called twice, when only one call is desired. We are using .NET 3.5 and Ninject 2.0.0.0. Is there a pattern in Ninject to prevent this from happening? Neither of the interfaces implements Ninject.IInitializable. The service class is:

        public class ConcreteService : IService1, IService2, Ninject.IInitializable
        {
            public void Initialize()
            {
                // This is called twice!
            }
        }

    And the module looks like this:

        public class ServiceModule : NinjectModule
        {
            public override void Load()
            {
                this.Singleton<IService1, IService2, ConcreteService>();
            }
        }

    where Singleton is an extension method defined like this:

        public static void Singleton<K, T>(this NinjectModule module) where T : K
        {
            module.Bind<K>().To<T>().InSingletonScope();
        }

        public static void Singleton<K, L, T>(this NinjectModule module) where T : K, L
        {
            Singleton<K, T>(module);
            module.Bind<L>().ToMethod(n => n.Kernel.Get<T>());
        }

    Of course we could add a bool initialized member to ConcreteService and initialize only when it is false, but that seems like quite a hack, and it would require repeating the same logic in every service that implements two or more interfaces. Thanks for all the answers! I learned something from all of them (and had a hard time deciding which one to mark correct). We ended up creating an IActivable interface and extending the Ninject kernel (this also nicely removed code-level dependencies on Ninject, although attributes still remain).

    Read the article

  • Java - abstract class, equals(), and two subclasses

    - by msr
    Hello, I have an abstract class named Xpto and two subclasses that extend it, named Person and Car. I also have a class named Test with main() and a method foo() that verifies whether two persons or cars (or any objects of a class that extends Xpto) are equal. Thus, I redefined equals() in both the Person and Car classes: two persons are equal when they have the same name, and two cars are equal when they have the same registration. However, when I call foo() in the Test class I always get "false". I understand why: equals() is not redefined in the Xpto abstract class. So... how can I compare two persons or cars (or any objects of a class that extends Xpto) in that foo() method? In summary, this is the code I have:

        public abstract class Xpto {
        }

        public class Person extends Xpto {
            protected String name;

            public Person(String name) {
                this.name = name;
            }

            public boolean equals(Person p) {
                System.out.println("Person equals()?");
                return this.name.compareTo(p.name) == 0 ? true : false;
            }
        }

        public class Car extends Xpto {
            protected String registration;

            public Car(String registration) {
                this.registration = registration;
            }

            public boolean equals(Car car) {
                System.out.println("Car equals()?");
                return this.registration.compareTo(car.registration) == 0 ? true : false;
            }
        }

        public class Teste {
            public static void foo(Xpto xpto1, Xpto xpto2) {
                if (xpto1.equals(xpto2))
                    System.out.println("xpto1.equals(xpto2) -> true");
                else
                    System.out.println("xpto1.equals(xpto2) -> false");
            }

            public static void main(String argv[]) {
                Car c1 = new Car("ABC");
                Car c2 = new Car("DEF");
                Person p1 = new Person("Manel");
                Person p2 = new Person("Manel");
                foo(p1, p2);
            }
        }
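    A minimal sketch of the usual fix (an editorial addition): equals(Person) merely overloads equals rather than overriding it, so a call through an Xpto reference resolves to Object.equals and compares identity. Declaring the exact equals(Object) signature restores dynamic dispatch:

        public class Person extends Xpto {
            protected String name;

            public Person(String name) {
                this.name = name;
            }

            @Override
            public boolean equals(Object obj) {
                if (!(obj instanceof Person))
                    return false;
                return name.equals(((Person) obj).name);
            }

            @Override
            public int hashCode() {
                return name.hashCode();   // keep consistent with equals
            }
        }

    Car gets the same treatment, with registration in place of name.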

    Read the article

  • Why isn't my new object being seen? C#

    - by Dan
    I am getting ready to take a college class on C#. I have been reading up on the language a lot and decided to start a fun project. Here is what my project consists of:

    - Main control form (MainControl.cs)
    - Configuration form (Configuration.cs)
    - Arduino

    Program.cs calls Configuration.cs at start. This is where pin modes for the Arduino are set and where a timer is set. When I set these values, they get sent to MainControl.cs. When I hit the "Save" button in Configuration.cs, a MainControl.cs object is created [[am I correct in that?]]. All of the values sent by Configuration.cs had corresponding setters that set private static variables in MainControl.cs [[I don't really know if that is the preferred way; I am most definitely open to any suggestions]]. MainControl.cs uses its default constructor, and this constructor calls a method that creates an Arduino object from one of the private variables (serialPort) [[using this Arduino class: Firmata.NET]]. I know (I guess I do) when the Arduino object is created because the form takes a few seconds to come up (as opposed to not using the serial port). My problem is this: I do not understand why nothing can see the object. I have been very wordy; I apologize if I wasn't concise. Here is the code:

        public partial class CMainControl : Form
        {
            private static string serialPort;

            public CMainControl()
            {
                InitializeComponent();
                createArduino();
                updateConfig(); // Change label values to values set in configuration
            }

            private void createArduino()
            {
                Arduino arduino = new Arduino(serialPort);
            }
        }

    In Configuration.cs, when I set the serial port through a combobox, the value is sent to MainControl.cs just fine. Here is the error I get:

        Error 1  The name 'arduino' does not exist in the current context
        C:\Programming\Visual Studio\Workhead Demo\Workhead Demo\CMainControl.cs  94  13  Workhead Demo

    Please let me know if anyone can help and/or offer pointers, and please let me know if I didn't post or format anything correctly. Thank you very much :)
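    A minimal sketch of the likely fix (an editorial assumption from the error message): arduino is declared as a local variable inside createArduino(), so it goes out of scope as soon as the method returns and no other member can see it. Promoting it to a field keeps the object alive and visible to the whole form:

        public partial class CMainControl : Form
        {
            private static string serialPort;
            private Arduino arduino;      // field: visible to every member of the form

            public CMainControl()
            {
                InitializeComponent();
                createArduino();
            }

            private void createArduino()
            {
                arduino = new Arduino(serialPort);   // assign the field, not a new local
            }
        }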

    Read the article

  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish setup (assume x.y.z.t is the IP of my VPS):

        Resource Type: javax.sql.ConnectionPoolDataSource
        User: root
        DatabaseName: scholar
        Url: jdbc:mysql://x.y.z.t:3306/scholar
        URL: jdbc:mysql://x.y.z.t:3306/scholar
        Password: xxxx
        PortNumber: 3306
        ServerName: x.y.z.t

    Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. In the mysql database, the query select User, Host from user; returns:

        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+

    From my own machine, I cannot connect to this database via a MySQL client either. From the table above, it seems only localhost connections are allowed. Keep in mind that both my database and my GlassFish server are on the same VPS. Please help.
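    A minimal sketch of the usual remedies (editorial assumptions based on the user table shown, not a confirmed diagnosis): either point the pool at localhost/127.0.0.1, since both servers share the VPS, or create an account allowed to connect over TCP from non-local hosts; on Ubuntu, also check that bind-address in /etc/mysql/my.cnf is not restricting MySQL to 127.0.0.1.

        -- allow an account to connect from anywhere (narrow '%' to a specific
        -- host in production; account name and password here are placeholders)
        CREATE USER 'scholar'@'%' IDENTIFIED BY 'xxxx';
        GRANT ALL PRIVILEGES ON scholar.* TO 'scholar'@'%';
        FLUSH PRIVILEGES;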

    Read the article

  • Problem with DataTrigger binding - setters are not being called

    - by aoven
    I have a Command bound to a Button in XAML. When executed, the command changes a property value on the underlying DataContext. I would like the button's Content to reflect the new value of the property. This works*: <Button Command="{x:Static Member=local:MyCommands.TestCommand}" Content="{Binding Path=TestProperty, Mode=OneWay}" /> But this doesn't: <Button Command="{x:Static Member=local:MyCommands.TestCommand}"> <Button.Style> <Style TargetType="{x:Type Button}"> <Style.Triggers> <DataTrigger Binding="{Binding Path=TestProperty, Mode=OneWay}" Value="True"> <DataTrigger.Setters> <Setter Property="Content" Value="Yes"/> </DataTrigger.Setters> </DataTrigger> <DataTrigger Binding="{Binding Path=TestProperty, Mode=OneWay}" Value="False"> <DataTrigger.Setters> <Setter Property="Content" Value="No"/> </DataTrigger.Setters> </DataTrigger> </Style.Triggers> </Style> </Button.Style> </Button> Why is that? * By "works" I mean the Content gets updated whenever I click the button. TIA

    Read the article

  • C# Thread-safe Extension Method

    - by Wonko the Sane
    Hello all, I may be waaaay off, or else really close. Either way, I'm currently SOL. :) I want to be able to use an extension method to set properties on a class, but that class may (or may not) be updated on a non-UI thread, and it derives from a class that enforces updates on the UI thread (which implements INotifyPropertyChanged, etc.). I have a class defined something like this:

        public class ClassToUpdate : UIObservableItem
        {
            private readonly Dispatcher mDispatcher = Dispatcher.CurrentDispatcher;
            private Boolean mPropertyToUpdate = false;

            public ClassToUpdate() : base()
            {
            }

            public Dispatcher Dispatcher
            {
                get { return mDispatcher; }
            }

            public Boolean PropertyToUpdate
            {
                get { return mPropertyToUpdate; }
                set { SetValue("PropertyToUpdate", ref mPropertyToUpdate, value); }
            }
        }

    I have an extension method class defined something like this:

        static class ExtensionMethods
        {
            public static IEnumerable<T> SetMyProperty<T>(this IEnumerable<T> sourceList, Boolean newValue)
            {
                ClassToUpdate firstClass = sourceList.FirstOrDefault() as ClassToUpdate;
                if (firstClass.Dispatcher.Thread.ManagedThreadId != System.Threading.Thread.CurrentThread.ManagedThreadId)
                {
                    // WHAT GOES HERE?
                }
                else
                {
                    foreach (var classToUpdate in sourceList)
                    {
                        (classToUpdate as ClassToUpdate).PropertyToUpdate = newValue;
                        yield return classToUpdate;
                    }
                }
            }
        }

    Obviously, I'm looking for the "WHAT GOES HERE?" in the extension method. Thanks, wTs
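    A minimal sketch of one answer (an editorial assumption, not the poster's final code; assumes System.Linq and System.Windows.Threading are imported): let the Dispatcher decide via CheckAccess() and marshal the writes with Invoke when called from a background thread. Since yield return and Invoke mix poorly, this variant materializes the list instead of streaming it:

        public static IEnumerable<T> SetMyProperty<T>(this IEnumerable<T> sourceList, bool newValue)
        {
            var items = sourceList.Cast<ClassToUpdate>().ToList();
            if (items.Count == 0)
                return sourceList;

            Action update = () =>
            {
                foreach (var item in items)
                    item.PropertyToUpdate = newValue;   // runs on the UI thread
            };

            Dispatcher dispatcher = items[0].Dispatcher;
            if (dispatcher.CheckAccess())
                update();                   // already on the UI thread
            else
                dispatcher.Invoke(update);  // marshal the whole batch across

            return items.Cast<T>();
        }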

    Read the article

  • Why is the Finalize method not allowed to be overridden?

    - by somaraj
    I am new to .net ..and i am confused with the destructor mechanism in C# ..please clarify In C# destructors are converted to finalize method by CLR. If we try to override it (not using destructor ) , will get an error Error 2 Do not override object.Finalize. Instead, provide a destructor. But it seems that the Object calss implementation in mscorlib.dll has finalize defined as protected override void Finalize(){} , then why cant we override it , that what virtual function for . Why is the design like that , is it to be consistent with c++ destructor concept. Also when we go to the definition of the object class , there is no mention of the finalize method , then how does the hmscorlib.dll definition shows the finalize funtion . Does it mean that the default destructor is converted to finalize method. public class Object { public Object(); public virtual bool Equals(object obj); public static bool Equals(object objA, object objB); public virtual int GetHashCode(); public Type GetType(); protected object MemberwiseClone(); public static bool ReferenceEquals(object objA, object objB); public virtual string ToString(); }

    Read the article

  • Invalid Cast Exception in ASP.NET but not in WinForms

    - by Shadow Scorpion
    I have a problem with this code:

        public static T[] GetExtras<T>(Type[] Types)
        {
            List<T> Res = new List<T>();
            foreach (object Current in GetExtras(typeof(T), Types))
            {
                Res.Add((T)Current); // this is the error
            }
            return Res.ToArray();
        }

        public static object[] GetExtras(Type ExtraType, Type[] Types)
        {
            lock (ExtraType)
            {
                if (!ExtraType.IsInterface)
                    return new object[] { };

                List<object> Res = new List<object>();
                bool found = false;
                found = (ExtraType == typeof(IExtra));
                foreach (Type CurInterFace in ExtraType.GetInterfaces())
                {
                    if (found = (CurInterFace == typeof(IExtra)))
                        break;
                }
                if (!found)
                    return new object[] { };

                foreach (Type CurType in Types)
                {
                    found = false;
                    if (!CurType.IsClass)
                        continue;

                    foreach (Type CurInterface in CurType.GetInterfaces())
                    {
                        try
                        {
                            if (found = (CurInterface.FullName == ExtraType.FullName))
                                break;
                        }
                        catch { }
                    }

                    try
                    {
                        if (found)
                            Res.Add(Activator.CreateInstance(CurType));
                    }
                    catch { }
                }
                return Res.ToArray();
            }
        }

    When I use this code in a Windows application it works! But I can't use it on an ASP page. Why?
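    A minimal diagnostic sketch (an editorial assumption, not a confirmed cause): the inner loop matches interfaces by FullName, so under ASP.NET, where the same assembly can be loaded into more than one context, a type can match ExtraType by name yet still not be castable to T. Printing both type identities usually confirms or rules this out:

        // 'current' stands for an element returned by GetExtras(typeof(T), Types)
        System.Diagnostics.Debug.WriteLine(typeof(T).AssemblyQualifiedName);
        System.Diagnostics.Debug.WriteLine(typeof(T).Assembly.Location);
        System.Diagnostics.Debug.WriteLine(current.GetType().AssemblyQualifiedName);
        System.Diagnostics.Debug.WriteLine(current.GetType().Assembly.Location);
        // identical names but different locations point to a double-loaded assembly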

    Read the article

  • Bootstrapping in CloudFormation with Autoscale

    - by PapelPincel
    My CloudFormation template creates an Auto Scaling group and bootstraps it with the utility script /opt/aws/bin/cfn-init. When I remove the bootstrap part from my template, the Auto Scaling group is created without any problem, but when I add it, the CloudFormation stack fails and adds this line to /var/log/cloud-init.log:

        Error: AutoScalingGroupName does not specify any metadata

    The line above appears right after the following command:

        /opt/aws/bin/cfn-init --verbose --configsets orderedConfig --region us-east-1 \
            --stack AS15 --resource AutoScalingGroupName \
            --access-key XXXXXXXXXXXXX --secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Digging a little deeper, in cfn-init I added the following lines at the point where it exits:

        from pprint import pprint
        pprint(vars(detail))

    and I get the following trace when running the previous cfn-init command:

        {'_description': None,
         '_lastUpdated': datetime.datetime(2012, 7, 12, 14, 52, 42),
         '_logicalResourceId': u'AutoScalingGroupName',
         '_metadata': None,
         '_physicalResourceId': u'AS15-AutoScalingGroupName-HNPOXXXXXXXX',
         '_resourceStatus': u'CREATE_COMPLETE',
         '_resourceStatusReason': None,
         '_resourceType': u'AWS::AutoScaling::AutoScalingGroup',
         '_stackId': u'arn:aws:cloudformation:us-east-1:XXXXXXXXXXXXX:stack/AS15/XXXXXXXX-cc30-11e1-XXXXXX-XXXXXXXXXX',
         '_stackName': u'AS15'}

    As you can see, the metadata field is empty, and that's the reason the stack fails to create. Are there any known side effects when cfn-init is used with Auto Scaling?
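    A minimal sketch of the usual arrangement (an editorial assumption consistent with the empty _metadata above; resource names are hypothetical): cfn-init reads AWS::CloudFormation::Init metadata from whichever resource --resource names, so the metadata block must sit on that resource. With Auto Scaling it conventionally lives on the launch configuration:

        "LaunchConfig" : {
          "Type" : "AWS::AutoScaling::LaunchConfiguration",
          "Metadata" : {
            "AWS::CloudFormation::Init" : {
              "configSets" : { "orderedConfig" : [ "config1" ] },
              "config1"   : { "packages" : { }, "files" : { }, "commands" : { } }
            }
          },
          "Properties" : { }
        },

    with the instance user data calling cfn-init as /opt/aws/bin/cfn-init --configsets orderedConfig --region us-east-1 --stack AS15 --resource LaunchConfig.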

    Read the article

  • How to serialize object containing NSData?

    - by AO
    I'm trying to serialize an object containing a number of data fields, where one of the fields is of type NSData, which won't serialize. I've followed the instructions at http://www.isolated.se but my code (see below) results in the error "[NSConcreteData data]: unrecognized selector sent to instance...". How do I serialize my object?

    Header file:

        @interface Donkey : NSObject<NSCoding> {
            NSString* s;
            NSData* d;
        }

        @property (nonatomic, retain) NSString* s;
        @property (nonatomic, retain) NSData* d;

        - (NSData*) serialize;

        @end

    Implementation file:

        @implementation Donkey

        @synthesize s, d;

        static NSString* const KEY_S = @"string";
        static NSString* const KEY_D = @"data";

        - (void) encodeWithCoder:(NSCoder*)coder
        {
            [coder encodeObject:self.s forKey:KEY_S];
            [coder encodeObject:self.d forKey:KEY_D];
        }

        - (id) initWithCoder:(NSCoder*)coder;
        {
            if(self = [super init]) {
                self.s = [coder decodeObjectForKey:KEY_STRING];
                self.d [coder decodeObjectForKey:KEY_DATA];
            }
            return self;
        }

        - (NSData*) serialize
        {
            return [NSKeyedArchiver archivedDataWithRootObject:self];
        }

        @end
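    A minimal sketch of a corrected initWithCoder: (an editorial assumption: the key constants should match the ones used in encodeWithCoder:, and the missing '=' means the decoded object is never assigned to the property):

        - (id)initWithCoder:(NSCoder *)coder
        {
            if ((self = [super init])) {
                self.s = [coder decodeObjectForKey:KEY_S];   // was KEY_STRING
                self.d = [coder decodeObjectForKey:KEY_D];   // the '=' was missing
            }
            return self;
        }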

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing a setup of a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error:

        [    3.153370] EXT3-fs (xvda2): using internal journal
        [    3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
        [    3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
        [    3.515604] init: failsafe main process (397) killed by TERM signal
        [    3.801589] blkfront: barrier: write xvda2 op failed
        [    3.801597] blkfront: xvda2: barrier or flush: disabled
        [    3.801611] end_request: I/O error, dev xvda2, sector 52171168
        [    3.801630] end_request: I/O error, dev xvda2, sector 52171168
        [    3.801642] Buffer I/O error on device xvda2, logical block 6521396
        [    3.801652] lost page write due to I/O error on xvda2
        [    3.801755] Aborting journal on device xvda2.
        [    3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
        [    3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
        [    3.814754] journal commit I/O error
        [    6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
        [    6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the DRBD device not to use barriers. This can be seen in /proc/drbd ("wo:f" means flush, the next method DRBD chooses after barrier):

        3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
           ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

        3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
           ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the DRBD docs:

        cat /sys/module/drbd/parameters/disable_sendpage
        Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

        [   58.603896] blkfront: barrier: write xvda2 op failed
        [   58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And because only one of my storage systems is battery-backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them?

    Both hosts are:

        Debian: 6.0.4
        uname -a: Linux 2.6.32-5-xen-amd64
        drbd: 8.3.7
        Xen: 4.0.1

    Guest:

        Ubuntu 12.04 LTS
        uname -a: Linux 3.2.0-24-generic pvops

    DRBD resource:

        resource drbdvm {
            meta-disk internal;
            device /dev/drbd3;

            startup {
                # The timeout value when the last known state of the other side was available. 0 means infinite.
                wfc-timeout 0;
                # Timeout value when the last known state was disconnected. 0 means infinite.
                degr-wfc-timeout 180;
            }

            syncer {
                # This is recommended only for low-bandwidth lines, to only send those
                # blocks which really have changed.
                #csums-alg md5;

                # Set to about half your net speed
                rate 60M;

                # It seems that this option moved to the 'net' section in drbd 8.4 (a later release than Debian currently has).
                verify-alg md5;
            }

            net {
                # The manpage says this is recommended only in pre-production (because of its performance),
                # to determine if your LAN card has a TCP checksum offloading bug.
                #data-integrity-alg md5;
            }

            disk {
                # Detach causes the device to work over-the-network-only after the
                # underlying disk fails. Detach is not default for historical reasons,
                # but is recommended by the docs.
                # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
                on-io-error detach;

                # LVM doesn't support barriers, so disabling it. It will revert to flush.
                # Check wo: in /proc/drbd. If you don't disable it, you get IO errors.
                no-disk-barrier;
            }

            on host1 {
                # universe is a VG
                disk /dev/universe/drbdvm-disk;
                address 10.0.0.1:7792;
            }

            on host2 {
                # universe is a VG
                disk /dev/universe/drbdvm-disk;
                address 10.0.0.2:7792;
            }
        }

    DomU cfg:

        bootloader = '/usr/lib/xen-default/bin/pygrub'
        vcpus = '2'
        memory = '512'

        # Disk device(s).
        root = '/dev/xvda2 ro'
        disk = [
            'phy:/dev/drbd3,xvda2,w',
            'phy:/dev/universe/drbdvm-swap,xvda1,w',
        ]

        # Hostname
        name = 'drbdvm'

        # Networking
        # fake IP for posting
        vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

        # Behaviour
        on_poweroff = 'destroy'
        on_reboot = 'restart'
        on_crash = 'restart'

    In my test setup the primary host's storage is a 9650SE SATA-II RAID PCIe controller with battery; the secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
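    One small, hedged note (an editorial addition): ext3 does not take "nobarrier"; its mount option is barrier=0 (versus barrier=1), so a DomU fstab line disabling barriers would look like:

        # /etc/fstab inside the DomU (sketch; device and other options are illustrative)
        /dev/xvda2  /  ext3  errors=remount-ro,barrier=0  0  1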

    Read the article

  • Does upgrading RAM cause an increase in the graphics card's share?

    - by A.S.
    I asked this question on Ask Ubuntu, and the most-voted answers suggested upgrading my RAM. But that raised a point about my graphics card: since I can upgrade the RAM but not the graphics card, does upgrading the RAM also cause the graphics memory to increase? To clarify, my specs are:

        Laptop: Lenovo 3000 Y410 (bought in October 2008)
        RAM: 1 GB (DDR2)
        External (dedicated) graphics: N/A
        Internal (shared) graphics: 256 MB
        Graphics chipset: Intel GMA X3100

    My question is: if I increase my RAM to 3 GB, will it increase the graphics card's share of the memory? In other words, if the graphics card shares 256 MB of 1 GB of RAM, will it share more when I upgrade the RAM to 2 GB or more? An authoritative resource link would be much appreciated. I have recently learned that my GMA X3100 chipset can address up to 384 MB of RAM, hence the question.

    Read the article

  • Zabbix machine is going crazy with HD writes!

    - by gshankar
    I recently installed Zabbix on an Ubuntu box I had sitting around. It's only monitoring two servers, but I've noticed that it's continuously hammering the disk with writes. I don't remember Zabbix being this resource-heavy when I've used it in the past... Any ideas on why this is happening and what I can do about it? Running iotop gives me this:

        1710 be/4 mysql 0.00 B/s 102.12 K/s 0.00 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        1723 be/4 mysql 0.00 B/s   0.00 B/s 0.00 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld

    I'm pretty sure it's Zabbix that's causing all that MySQL activity, as it's the only thing running on the box that uses MySQL.

    Read the article

  • Run a .java file using ProcessBuilder

    - by David K
    I'm a novice programmer working in Eclipse, and I need to get multiple processes running (this is going to be a simulation of a multi-computer system). My initial hackup used multiple threads to multiple classes, but now I'm trying to replace the threads with processes. From my reading, I've gleaned that ProcessBuilder is the way to go. I have tried many many versions of the input you see below, but cannot for the life of me figure out how to properly use it. I am trying to run the .java files I previously created as classes (which I have modified). I eventually just made a dummy test.java to make sure my process is working properly - its only function is to print that it ran. My code for the two files are below. Am I using ProcessBuilder correctly? Is this the correct way to read the output of my subprocess? Any help would be much appreciated. David primary process package Control; import java.io.*; import java.lang.*; public class runSPARmatch { /** * @param args */ public static void main(String args[]) { try { ProcessBuilder broker = new ProcessBuilder("javac.exe","test.java","src\\Broker\\"); Process runBroker = broker.start(); Reader reader = new InputStreamReader(runBroker.getInputStream()); int ch; while((ch = reader.read())!= -1) System.out.println((char)ch); reader.close(); runBroker.waitFor(); System.out.println("Program complete"); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } subprocess package Broker; public class test { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub System.out.println("This works"); } }

    Read the article
