Search Results

Search found 14924 results on 597 pages for 'kernel mode'.

  • Ninject - initialise objects

    - by James Lin
    Hi guys, I am new to Ninject, and I am wondering how I can run custom initialization code when constructing the injected objects. For example, I have a Sword class which implements IWeapon, but I want to pass a hit point value to the Sword constructor. How do I achieve that? Do I need to write my own provider? A minor question: in IKernel kernel = new StandardKernel(new Module1(), new Module2(), ...); what is the actual use of having multiple modules in the kernel? I sort of understand it, but could someone give me a formal explanation and a use case? Thanks a lot! James
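
    A minimal sketch of one way to do this in Ninject 2, assuming hypothetical Sword/IWeapon types and a made-up "hitPoints" parameter name; WithConstructorArgument avoids a custom provider here, and each module simply groups a related set of bindings:

        using Ninject;
        using Ninject.Modules;

        public interface IWeapon { }

        public class Sword : IWeapon
        {
            private readonly int _hitPoints;

            public Sword(int hitPoints) { _hitPoints = hitPoints; }
        }

        public class WeaponsModule : NinjectModule
        {
            public override void Load()
            {
                // Supply the extra constructor value at binding time;
                // Ninject resolves any remaining parameters as usual.
                Bind<IWeapon>().To<Sword>().WithConstructorArgument("hitPoints", 30);
            }
        }

    As for multiple modules: passing several of them to new StandardKernel(new WeaponsModule(), ...) merges their bindings into one kernel, which lets you organize bindings per subsystem and swap whole groups of them in tests.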

  • Time to ignore IDisposable?

    - by Mystagogue
    Certainly we should call Dispose() on IDisposable objects as soon as we don't need them (which is often merely the scope of a "using" statement). If we don't take that precaution then bad things, from subtle to show-stopping, might happen. But what about "the last moment" before process termination? If your IDisposables have not been explicitly disposed by that point in time, isn't it true that it no longer matters? I ask because unmanaged resources, beneath the CLR, are represented by kernel objects, and Win32 process termination will free all unmanaged resources / kernel objects anyway. Said differently, no resources will remain "leaked" after the process terminates (regardless of whether Dispose() was called on lingering IDisposables). Can anyone think of a case where process termination would still leave a leaked resource, simply because Dispose() was not explicitly called on one or more IDisposables? Please do not misunderstand this question: I am not trying to justify ignoring IDisposables. The question is purely technical-theoretical.
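
    One concrete case worth keeping in mind, tangential to true leaks: Dispose() sometimes carries semantics beyond releasing a kernel object. A sketch, assuming the stream still holds buffered data at exit (file name hypothetical):

        using System.IO;

        class Program
        {
            static void Main()
            {
                var writer = new StreamWriter(new FileStream("log.txt", FileMode.Create));
                writer.Write("still sitting in a user-space buffer");
                // Process exits here without writer.Dispose()/Flush().
                // The kernel reclaims the file handle, so nothing "leaks",
                // but the buffered text may never reach the disk.
            }
        }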

  • Best code structure for arcade games

    - by user280454
    Hi, I've developed a few arcade games so far and have the following structure in almost all of them. One file contains a class called Kernel with the following functions: init(), welcome_screen(), menu_screen(), help_screen(), play_game(), end_screen(). Another file, called Game, basically calls all these functions and controls the flow of the game. I also have classes for the different characters, wherever required. Is this a good code structure, or should I have different files (classes) for different functions like the welcome screen, gameplay, help screen, etc.? That is, instead of including all the code for these things in one file, should I have a separate class for each of them? The only problem I think that might cause is that I need certain variables, like score and characters, which are common to all of them; that's why I put all these functions in a Kernel class, so they can all share those variables.
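
    As a hedged illustration of the alternative (written in C# with hypothetical names), one common middle ground is to keep the shared state in a context object that each screen class receives, rather than folding every screen into one kernel class:

        using System.Collections.Generic;

        // Shared state lives in one context object...
        public class GameContext
        {
            public int Score;
            public List<string> Characters = new List<string>();
        }

        // ...and each screen is its own class that receives the context.
        public interface IScreen
        {
            void Run(GameContext context);
        }

        public class WelcomeScreen : IScreen
        {
            public void Run(GameContext context) { /* draw the welcome UI */ }
        }

        public class PlayScreen : IScreen
        {
            public void Run(GameContext context)
            {
                context.Score += 10;   // screens share score/characters via the context
            }
        }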

  • How to Use Windsor without Property Injection

    - by Grandpappy
    I am attempting to use the WindsorContainer, but I need to turn off property injection for the entire container. Here's the code I was trying, but it doesn't seem to have any effect:

        this.Container = new WindsorContainer(
            new XmlInterpreter(new ConfigResource("Dependencies")));

        this.Container.Kernel.ComponentModelBuilder.RemoveContributor(
            this.Container.Kernel.ComponentModelBuilder.Contributors
                .OfType<Castle.MicroKernel.ModelBuilder.Inspectors.PropertiesDependenciesModelInspector>()
                .Single());

    I'd rather not use the [DoNotWire] attribute on my properties, because I don't want my application to have to know it's running under Windsor. Any help would be greatly appreciated.

  • Laplacian of Gaussian

    - by Don
    I am having trouble implementing a LoG kernel. I am trying to implement the 9x9 kernel with sigma = 1.4 shown at http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm. However, I am having difficulty with the formula itself: whatever values I plug into it, I don't get any of the values in the 9x9 LoG kernel with sigma = 1.4. If someone could show how they arrived at one of the big values, e.g. -40 or -23, or provide code that implements it, it would be greatly appreciated. Thank you.
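
    For reference, a worked form of the function being sampled (writing sigma for what the question calls theta; this is the standard continuous LoG, stated here as an aid rather than quoted from the linked page):

        \mathrm{LoG}(x, y) = -\frac{1}{\pi \sigma^{4}}
            \left[ 1 - \frac{x^{2} + y^{2}}{2 \sigma^{2}} \right]
            e^{-\frac{x^{2} + y^{2}}{2 \sigma^{2}}}

    Discrete integer kernels like the 9x9 table are typically produced by sampling such a function at integer (x, y) offsets from the centre and then scaling and rounding, so raw formula values will not match the table entries until a common scale factor is applied.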

  • What happens after a packet is captured?

    - by Rayne
    Hi all, I've been reading about what happens after packets are captured by NICs, and the more I read, the more confused I get. First, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space, and then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC copies the packet directly into memory, bypassing the CPU. So is the NIC -> kernel memory -> user-space memory flow still valid? Also, do most NICs (e.g. Myricom) use DMA to improve packet capture rates? Secondly, does RSS (Receive Side Scaling) work similarly on both Windows and Linux systems? I can only find detailed explanations of how RSS works in MSDN articles, which describe how RSS (and MSI-X) works on Windows Server 2008. The same concepts of RSS and MSI-X should still apply to Linux systems, right? Thank you. Regards, Rayne

  • How to find which type of system call is used by a program

    - by bala1486
    I am working on an x86_64 machine, and my Linux kernel is also a 64-bit kernel. Since there are different ways to implement a system call (int 0x80, syscall, sysenter), I wanted to know which type of system call my machine is using. I am a newbie to Linux. I have written a demo program:

        #include <unistd.h>

        int main(void)
        {
            getpid();   /* makes one system call */
            return 0;
        }

    Can anybody give me a method to find out which type of system call will be used by my machine for this program? Thank you.

  • Partial compilation of the OpenWrt project

    - by yosig81
    I would like to get an idea of, or a reference for, how to compile only a subset of the OpenWrt project. I am aware of the menuconfig utility, but it is not enough for my goal. I would like to compile only the toolchain (binutils + gcc + glibc) for a specific target (ar71xx), and also the kernel. After looking at the makefiles etc., I have noticed that most of the work is actually patching the toolchain and the kernel and then compiling them. Is there any option to stop the build process after the patching, so that I have only the patched source code and can write my own makefile to compile it?

  • Ruby and Forking

    - by Cory
    Quick question about Ruby forking: I ran across a bit of forking code in Resque earlier that was sexy as hell but tripped me up for a bit. I'm hoping someone can give me a little more detail about what's going on here. Specifically, it would appear that forking spawns a child (expected) and kicks it straight into the 'else' side of my condition (less expected). Is that expected behavior? A Ruby idiom? My IRB hack is here:

        def fork
          return true if @cant_fork
          begin
            if Kernel.respond_to?(:fork)
              Kernel.fork
            else
              raise NotImplementedError
            end
          rescue NotImplementedError
            @cant_fork = true
            nil
          end
        end

        def do_something
          puts "Starting do_something"
          if foo = fork
            puts "we are forking from #{Process.pid}"
            Process.wait
          else
            puts "no need to fork, let's get to work: #{Process.pid} under #{Process.ppid}"
            puts "doing it"
          end
        end

        do_something

  • Unknown Linking Error

    - by Nathan Campos
    I'm developing my own OS, and for that I need to get into linking. I've written this linker script to build it:

        ENTRY (loader)
        SECTIONS {
            . = 0x00100000
            .text : {
                *(.text)
            }
            .bss : {
                sbss = .;
                *(COMMON)
                *(.bss)
                ebss = .;
            }
        }
        .data ALIGN (0x1000) : {
            start_ctors = .;
            *(.ctor*)
            end_ctors = .;
            start_dtors = .;
            *(.dtor*)
            end_dtors = .;
            *(.data)
        }

    But when I try to link everything, I get an error:

        $ ld -T linker.ld -o kernel.bin loader.o kernel.o
        ld:linker.ld:5: syntax error
        $

    What can I do?

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU and the rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        A hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all of these technical details are simply wrong, and in some random order, but you get my point, right? How much longer or shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do it in an interpreter? (Python|Ruby|PHP) Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? For example, is there a direct connection: my binary <-> hardware?

  • ptrace'ing of parent process

    - by osgx
    Hello. Can a child process use the ptrace system call to trace its parent? The OS is Linux 2.6. Thanks.

    UPD1: I want to trace process1 from "itself". That is impossible, so I fork and try to call ptrace(PTRACE_ATTACH, process1_pid, ...) from the child process. But I can't; there is a strange error, as if the kernel prohibits children from tracing their parent processes.

    UPD2: Such tracing can be prohibited by security policies. Which policies do this? Where is the checking code in the kernel?

    UPD3: On my embedded Linux I get no errors with PEEKDATA, but GETREGS fails:

        child: getregs parent: -1 errno is 1, strerror is Operation not permitted

    (errno 1 is EPERM.)

  • Contextual bindings with Ninject 2.0

    - by Przemaas
    In Ninject 1.0 I had the following binding definitions:

        Bind<ITarget>().To<Target1>()
            .Only(When.Context.Variable("variable").EqualTo(true));
        Bind<ITarget>().To<Target2>();

    Given these bindings, I had the calls:

        ITarget target = kernel.Get<ITarget>(With.Parameters.ContextVariable("variable", true));
        ITarget target = kernel.Get<ITarget>(With.Parameters.ContextVariable("variable", false));

    The first call resolved to an instance of Target1; the second call resolved to an instance of Target2. How do I translate this into Ninject 2.0?
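
    A sketch of the closest Ninject 2.0 equivalent, with the caveat that .When() now receives the request, and the exact way to read a parameter's value differs between 2.x releases, so the predicate below (matching only on the parameter's presence) is an assumption:

        using System.Linq;
        using Ninject;
        using Ninject.Parameters;

        // Inside a NinjectModule.Load(): route the request by the named parameter.
        Bind<ITarget>().To<Target1>()
            .When(request => request.Parameters
                .OfType<Parameter>()
                .Any(p => p.Name == "variable"));
        Bind<ITarget>().To<Target2>();

        // Callers attach the parameter explicitly; the last argument is shouldInherit.
        ITarget target = kernel.Get<ITarget>(new Parameter("variable", true, false));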

  • What knowledge/expertise is required to port Android to a custom ARM device?

    - by Sunny
    Hi friends, I am working on a system which currently runs a Linux kernel with the Microwindows windowing system. The code for the current Linux system's drivers is available to me. I want to port Android to it, just as a hobby project. Can you please tell me what understanding of the Linux kernel is required to port it? Please give me references (books, tutorials) to build up that understanding. Thanks, Sunny. P.S. I have a basic understanding of Linux. The configuration of the device is: 450 MHz ARM9, 64 MB RAM, 256 MB NAND, 480x272 resolution.

  • How to use Device Emulator to debug my WinCE application

    - by ame
    I am working with an MFC application built on a WinCE platform in Visual Studio. I need to debug this application and I cannot do it using KITL and the hardware. I tried to use the Device Emulator for this: I started a new Platform Builder project (PDA device, Enterprise Web Pad). I built it after ensuring that KITL and the kernel debugger were enabled. Once built, I set the target connectivity options to CE Device, download and transport to Device Emulator, and the debugger to KdStub. Once I hit Attach to Device, the "download to target" window pops up, and so does the RelDir window. However, nothing happens after this, and the output window says: "PB Debugger: The Kernel Debugger is waiting to connect with target." Please guide me on what I need to do to debug my application. Thank you!

  • Implementation of APIs on different platforms

    - by b-gen-jack-o-neill
    OK, this is basically about any non-default OS API running on any OS, but for my example let's consider the platform Windows and the API SDL (Simple DirectMedia Layer). This question actually came to my mind when I was reading about SDL. Originally, I thought that on Windows (and basically any other OS) you must use the OS API to perform certain actions, like writing to the screen, creating a window and so on, because that API knows what kernel calls and system subroutine calls it has to make. But reading about SDL surprised me, because you cannot make the computer do anything more than the OS can; you cannot access the hardware directly, only through the OS API, from console allocation to DirectX. So, my question actually is: how do these non-default OS APIs work? Do they use (wrap) the original system API (like MFC wraps the Win32 API)? Or do they actually have direct access to the Windows kernel? Or is there some third way in between? Thanks.

  • CUDA program results are always zero on HW, correct in EMU?

    - by Orion Nebula
    Hi all! I am having a weird problem. I have written CUDA code which executes correctly in emulation, and all results show up; however, when executed on hardware (a G210), the results in the result memory are always 0. I am passing two vectors to the kernel: one with random variables, the other initialized to zero. The code copies the first vector to shared memory, does some swapping and other operations, and then writes the results back to the second vector (the one initialized with 0's). I am using double precision, the -arch sm_13 flag is used, and all memory allocations also use sizeof(double). I have checked that the kernel is invoked, and it is, so no problems there. cudaMemcpy has no problems either. What could be the problem? Why would it work in emulation but not on the hardware? I am quite confused. Any ideas?

  • Creating a PHP call home "time bomb" to protect my interests

    - by RC
    Hi everyone, I produced a PHP web app for a client some months back that is hosted on their own server. I have still not been paid for this work, and they are giving me the runaround. It turns out that I still have remote admin access to their server, so I can make code changes. What I was thinking of doing was to move the core kernel off-site onto one of my own servers, and program in some kind of callback or include that gets the kernel (critical functions) from my server. Give it two weeks or so for their backups to catch this change, and then pull the plug and exercise leverage. If I do this, they will pay immediately, because the site is a critical one for a very large and influential client of theirs. What is the most effective and easiest way of doing this? What code do I use? Thanks for any pointers.

  • Why can't I inject the value null with Ninject's ConstructorArgument?

    - by stiank81
    When using Ninject's ConstructorArgument you can specify the exact value to inject into specific parameters. Why can't this value be null, and how can I make it work? Maybe it's not something you'd normally want to do, but I want to use it in my unit tests. Example:

        public class Ninja
        {
            private readonly IWeapon _weapon;

            public Ninja(IWeapon weapon)
            {
                _weapon = weapon;
            }
        }

        public void SomeFunction()
        {
            var kernel = new StandardKernel();
            var ninja = kernel.Get<Ninja>(new ConstructorArgument("weapon", null));
        }

  • Symfony 2 - UrlGenerator::doGenerate is called before listener

    - by guyaloni
    I want to add a parameter to the context, so that when login is called I can use it in the route (similar to _locale). I can add this piece of code in HttpUtils.php (as resetLocale does), but I don't find that very clean. The reason I need it is the firewall's redirection to the login controller, whose route I would like to contain a customized parameter. My problem is that my listener is called after UrlGenerator::doGenerate is called, so I get a MissingMandatoryParametersException. Here is the relevant code from my config.yml:

        services:
            mycompany.demobundle.listener.request:
                class: MyCompany\DemoBundle\RequestListener
                arguments: [@router, @security.context]
                tags:
                    - { name: kernel.event_listener, event: kernel.request, method: onKernelRequest }

    Any ideas?

  • C++ template parameter/class ambiguity

    - by aaa
    Hello. While testing with different versions of g++, the following problem came up:

        template<class bra>
        struct Transform<bra, void> : kernel::Eri::Transform::bra {
            static const size_t ni = bra::A::size;
            // ...
        };

    bra::A is interpreted as kernel::Eri::Transform::bra::A, rather than as the template argument, by g++ 4.1.2. On the other hand, g++ 4.3 gets it right. What should the correct behavior be according to the standard? In the meantime, I have refactored slightly to make the problem go away.

  • What is the preferred method of accessing WWW::Mechanize responses?

    - by sid_com
    Hello! Are both of these versions OK, or is one of them preferable?

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use feature 'say';
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new();
        my $content;

        # 1
        $mech->get( 'http://www.kernel.org' );
        $content = $mech->content;
        say $content;

        # 2
        my $res = $mech->get( 'http://www.kernel.org' );
        $content = $res->content;
        say $content;

  • Diagonal Output of an Assembly Program

    - by Yousuf Umar
    I have this assembly program and I want diagonal output from it, but I don't know how to put a tab space in assembly:

        section .text
            global _start            ; must be declared for using gcc

        _start:                      ; tell linker entry point
            mov edx, len             ; message length
            mov ecx, msg             ; message to write
            mov ebx, 1               ; file descriptor (stdout)
            mov eax, 4               ; system call number (sys_write)
            int 0x80                 ; call kernel

            mov eax, 1               ; system call number (sys_exit)
            int 0x80                 ; call kernel

        section .data
            msg db 'Y',10,'O',10,'U',10,'S',10,'U',10,'F'   ; our dear string
            len equ $ - msg                                  ; length of our dear string

    The output of my program is:

        Y
        O
        U
        S
        U
        F

    The output should look like this:

        Y
         O
          U
           S
            U
             F

    Or is there any other way to write this program and get that output?

  • Windsor + NHibernate + ISession + MVC

    - by dbones
    Hi, I am trying to get Windsor to give me an instance of ISession for each request, which should be injected into all the repositories. Here is my container setup:

        container.AddFacility<FactorySupportFacility>().Register(
            Component.For<ISessionFactory>()
                .Instance(NHibernateHelper.GetSessionFactory())
                .LifeStyle.Singleton,
            Component.For<ISession>()
                .LifeStyle.Transient
                .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
        );

        // add to the container
        container.Register(
            Component.For<IActionInvoker>().ImplementedBy<WindsorActionInvoker>(),
            Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHibernateRepository<>))
        );

    It is based on a StructureMap post here: http://www.kevinwilliampang.com/2010/04/06/setting-up-asp-net-mvc-with-fluent-nhibernate-and-structuremap/. However, when this is run, a new session is created for every object it is injected into. What am I missing? Thanks in advance. (FYI, the NHibernateHelper sets up the configuration for NHibernate.)
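
    If the goal is one session per HTTP request rather than per resolution, the Transient lifestyle is the likely culprit: Transient hands every dependent component its own instance. A hedged sketch using Windsor's per-web-request lifestyle instead (which also requires registering Castle's PerWebRequestLifestyleModule as an HTTP module in web.config):

        Component.For<ISession>()
            // Scope the session to the web request instead of Transient,
            // so every repository resolved during a request shares it.
            .UsingFactoryMethod(k => k.Resolve<ISessionFactory>().OpenSession())
            .LifeStyle.PerWebRequest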

  • Is it right to implement business logic in the type bindings of a DI framework?

    - by Martino
    Consider this strategy factory method:

        public IRedirect FactoryStrategyRedirect()
        {
            if (_PasswordExpired)
            {
                return _UpdatePasswordRedirectorFactory.Create();
            }
            else
            {
                return _DefaultRedirectorFactory.Create();
            }
        }

    It can be replaced with type bindings and When clauses:

        Bind<IRedirect>().To<UpdatePasswordRedirector>()
            .When(c => c.Kernel.Get<SomeContext>().PasswordExpired());
        Bind<IRedirect>().To<DefaultRedirector>()
            .When(c => !c.Kernel.Get<SomeContext>().PasswordExpired());

    I wonder which of the two approaches is more correct. What are the pros and cons, especially when the logic is more complex, with more variables to test and more concrete classes to return? Is it right to implement business logic in the bindings?
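
    A middle ground worth weighing (a sketch, assuming Ninject's ToMethod binding and the same hypothetical types): keep the branching in one factory-method binding, so the rule stays explicit but still lives in the composition root rather than being split across When clauses:

        Bind<IRedirect>().ToMethod(ctx =>
            ctx.Kernel.Get<SomeContext>().PasswordExpired()
                ? (IRedirect)ctx.Kernel.Get<UpdatePasswordRedirector>()
                : ctx.Kernel.Get<DefaultRedirector>());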

    Read the article
