Search Results

Search found 17160 results on 687 pages for 'built in types'.


  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a Service Centric Networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link and have paths calculated as in OSPF v2/v3, but using the cost that my algorithms have computed.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures, and create new message types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new and move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I need to edit here. I have been searching for days now and am unable to find how to deploy a routing protocol that does not already exist, without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome.

    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
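    A sketch of one deployment route, under the assumption of a Debian-style Quagga install (the /etc/quagga/daemons file is Debian/Ubuntu specific; paths vary by distribution). Quagga's daemons run entirely in user space, and zebra installs whatever routes the modified ospfd computes into the kernel forwarding table, so no kernel changes should be needed:

        # Build and install the patched Quagga tree on every testbed node
        ./configure --prefix=/usr --sysconfdir=/etc/quagga --localstatedir=/var/run/quagga
        make
        sudo make install

        # Enable zebra (the kernel-route installer) and the modified ospfd
        sudo sed -i 's/^zebra=no/zebra=yes/; s/^ospfd=no/ospfd=yes/' /etc/quagga/daemons
        sudo service quagga restart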

    Read the article

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running an Ubuntu Server on a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible through the Apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's ip-address/~username/phpfile.php) it does not display as it should. Instead it offers to save the file / asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this/potential solutions it would make me very happy :) EDIT: Turns out this behaviour was only apparent on files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)
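    The "/var/www/ works but ~/public_html/ doesn't" symptom matches a stock Ubuntu default worth checking (an assumption based on the packaged config of that era, not confirmed from the question): the php5 module ships with the PHP engine switched off for per-user directories. Commenting out that block and reloading Apache re-enables it:

        # /etc/apache2/mods-available/php5.conf -- Ubuntu's packaged default
        # disables PHP execution under per-user public_html directories:
        <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
                php_admin_value engine Off
            </Directory>
        </IfModule>

        # Comment out the block above, then:
        #   sudo service apache2 reload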

    Read the article

  • How to install Windows with no CD drive boot option in BIOS?

    - by Kris Hollenbeck
    I have a new computer which I built from scratch and I am trying to install a copy of Windows Vista on it. I am able to get to the BIOS and change the boot options, which are as follows:

        - Built-in EFI Shell
        - SATA: ST31000528AS

    I have searched around, and everything I find says to boot from the CD-ROM. However, as you can see, that is not an option for me. So I am wondering if there is another way around this. Is it possible to boot the disc from the EFI Shell? Any help is much appreciated. Thanks. EDIT: I have tried this: http://technet.microsoft.com/en-us/library/dd744321%28v=ws.10%29.aspx UPDATE: I managed to make my USB bootable via the BIOS, and I have copied my Windows Vista disc onto the USB via drag and drop. However I am still not able to get the Windows install to start. Also, I have tried booting it from the EFI shell using the following commands:

        blk6:
        blk6:\> \EFI\BOOT\BOOTX64.EFI

    Still no luck.
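    For reference, a sketch of the usual diskpart preparation for an installer USB (from memory, not taken from the linked article; the disk number is illustrative, so verify it against the list disk output before cleaning anything). Drag-and-drop copying alone leaves the partition inactive, and EFI firmware generally expects a FAT32 volume:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1        (the USB stick -- double-check the number!)
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> select partition 1
        DISKPART> active
        DISKPART> format fs=fat32 quick
        DISKPART> assign
        DISKPART> exit

        rem Then copy the installation files from the mounted DVD/ISO, e.g.:
        xcopy D:\*.* /s /e /f E:\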

    Read the article

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. Built a new 2012 R2 server and added the following software (LabTech, AppAssure, ESET A/V, and TeamViewer). It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep, and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again, but it would not accept the key. I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services - Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied". I called MS, and they wanted to troubleshoot at a service level. Their fix was to use Procmon to identify the resource that needed permissions (registry key, file, or folder) and add "Everyone" with full control. That got the services to start, but the problem re-appeared after a reboot. Thinking the issue might have been with the anti-virus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines - "RPC server unavailable"). I tried to promote the newly built machines again, the only changes to the brand-new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to re-appear. I have verified that the schema updates are correct (schema version is 69, for Windows 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar...

    Read the article

  • How to add extensions to a lot of files using content of each file?

    - by v8media
    I've got over 10,000 files that don't have extensions, from older versions of the Mac OS. They're extremely nested, and they also have all sorts of strange formatting and characters. They no longer have file types or creator codes attached to them. A great deal of these files have text in the file that will let me determine extensions (for example, Word.Document.8 is in every file created by that version of Word, and Excel.Sheet.8 in every file created with that version of Excel). I found a script that looks like it would work for one of these file types at a time, but it erases parts of filenames after nefarious characters, which is not good:

        find . -type f -not -name "." -print0 |\
            xargs -0 file |\
            grep 'Word.Document.8' |\
            sed 's/:.*//' |\
            xargs -I % echo mv % %.doc

    So, two questions from that. One: should I clean the characters in the filenames first, or programmatically deal with those in the script in order to leave them the same? As long as I lose no information from the filenames, I don't see a problem cleaning out slashes and other problem characters. Also, if I clean the filenames, there are likely to be duplicates, so any cleaning script would have to add something like "-1" before the extension to make sure nothing gets lost. The second question is: how do I change the script so that it will look for more than one file type at the same time and give each the proper extension? I'm not tied to this script, but it is understandable, which is a pro. Mac OS X 10.6 is installed on this file server, but I've got access to any recent versions of OS X. Thanks, Ian
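    A sketch of one way to handle both points at once (the two marker strings are the ones from the question; further case branches would be added the same way). Looping over null-delimited paths means no filename ever passes through grep/sed, so odd characters survive intact, and the existence check appends -1, -2, ... instead of overwriting:

        #!/bin/bash
        # Classify each file by its content and append a matching extension.
        find . -type f -print0 | while IFS= read -r -d '' f; do
            case "$(file -b "$f")" in
                *Word.Document.8*) ext=doc ;;
                *Excel.Sheet.8*)   ext=xls ;;
                *)                 continue ;;   # unknown type: leave untouched
            esac
            target="$f.$ext"
            n=1
            while [ -e "$target" ]; do           # avoid clobbering duplicates
                target="$f-$n.$ext"
                n=$((n + 1))
            done
            mv "$f" "$target"
        done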

    Read the article

  • Separate external and intranet portals using the same functions .htaccess

    - by jezzipin
    We are currently struggling with setting up rules for a .htaccess file for a website built upon our company product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So, the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites, and:

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. We need to write our .htaccess rules so that the user can access the functionality whether they are internal or external. So, the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    or:

        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This is the other problem: as you can see from the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags as that would affect other sites, so we need to achieve this too using .htaccess. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code posted above; I am a front-end developer and have been left to make these changes with no prior experience of .htaccess, so please bear with me.
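    A sketch of mod_rewrite rules that could live in the site's .htaccess, assuming mod_rewrite is enabled and the procedures really are served under /intranet/ (the pattern is illustrative and untested against the actual product):

        RewriteEngine On

        # Present /internal/... to users, but serve it from /intranet/...
        # QSA keeps the p_web_site_id / p_candidate_id query string intact.
        RewriteRule ^internal/(wd_portal_.*)$ /intranet/$1 [QSA,L]

        # External links like /wd_portal_cand.menu?... need no rule if they
        # already map to the PL/SQL gateway by default.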

    Read the article

  • System With Two Network Adapters

    - by Synetech inc.
    Hi, My system has a NIC (Marvell Yukon) built into the motherboard, but I also have a D-Link (RealTek) card. I figure that using the D-Link and disabling the Marvell makes the most sense, though I'm wondering if maybe the built-in one has better throughput (not that my Internet connection is so fast). Also, I'm wondering about the merits of using both at the same time. My router has four ports and I have experimented with enabling and plugging both NICs into the router. I was able to connect to the Internet, but the pattern of usage seemed irregular (which adapter was chosen for the transfer at any given point). I also considered bridging the two, but am having difficulty in finding out what exactly creating a network bridge does in the context of the Windows Network Connections window. I am familiar with the concept of connecting networks, so it seems to me that bridging two connections on the same segment is pointless at best (and can cause problems like loops?). Does anyone have any tips on what to do if a system has more than one NIC, and any clarification on the bridge option? Thanks a lot.

    Read the article

  • CD Drive not discovered

    - by user1009073
    I have a self-built computer. It uses a P6T Deluxe motherboard, which has both SATA and IDE ports. This was built several years ago with an IDE CD/DVD drive. That drive started going bad (it would not burn CDs correctly), so I decided to replace it. I had difficulty finding an IDE DVD drive, so I bought a SATA DVD drive. I opened the computer and took out the old DVD drive. I left the IDE cable in place, connected to the motherboard, but it is not connected to any drives. I hooked up the new DVD drive, both power and a SATA data cable (SATA port 3 if I recall). (Sony Optiarc 24x, Newegg URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16827118067 ) When I power on my computer, the drive does NOT show up in Explorer. I can hit the DVD eject button and the drive will open up, so I know it at least is getting power. I thought maybe something in the BIOS. When I go to BIOS, boot devices, it shows (1) floppy, (2) my hard drive, (3) ATAPI CD drive. The only other possible BIOS option I could find was under 'Storage Configuration' - "Configure Storage as:". My setting is RAID, since I am using two drives in a RAID configuration; the other options were IDE and AHCI. Other than trying to find an IDE DVD drive, is there anything else I can try? The drive does not show up at all in Windows Explorer. I did put in a CD thinking that might help, but nothing happened. Thanks, GS

    Read the article

  • Get Matrox Millennium video card working in Ubuntu 9.10

    - by wcoenen
    I have installed Ubuntu 9.10 on an old PC and it is mostly working, except for some heavy drawing defects that show up whenever I start dragging a window or scrolling inside a window or menu. It looks like the video driver copies the rectangle being moved to the wrong location. I have taken a look in /var/log/Xorg.0.log and the following lines show the detected video card:

        (--) PCI:*(0:0:8:0) 102b:0519:0000:0000 Matrox Graphics, Inc. MGA 2064W [Millennium] rev 1, Mem@ 0xf9800000/16384, 0xfb000000/8388608, BIOS @0x????????/65536
        (==) Using default built-in configuration (30 lines)
        (==) --- Start of built-in configuration ---
        Section "Device"
            Identifier "Builtin Default mga Device 0"
            Driver "mga"
        EndSection

    How do I fix the drawing defects? It turned out that the 24-bit color depth (automatically selected by Ubuntu 9.10) was the problem; apparently the mga driver doesn't handle this well for cards with little memory. I took the following steps to resolve the issue (you can skip the first three steps if you already have a semi-working xorg.conf file):

    1. Reboot Ubuntu in recovery mode, to get a root console without X running.
    2. Run Xorg -configure to generate a xorg.conf.new file.
    3. Copy the file to /etc/X11/xorg.conf with cp xorg.conf.new /etc/X11/xorg.conf (assuming it didn't exist yet; that's why I generated it).
    4. Open the new config file with sudo nano /etc/X11/xorg.conf and make sure the screen section is configured for 16-bit color depth like this:

        Section "Screen"
            Identifier "Screen0"
            Device "Card0"
            Monitor "Monitor0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth 16
                Modes "1024x768"
            EndSubSection
        EndSection

    I can't guarantee those were the only important changes I made - I tried a few things in my attempts to create a valid xorg.conf file. But I'm pretty sure that the screen section was the important part.

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like:

        [CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target", "Deploying MyModule module")) {
                if (!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed.
        You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, the Update-FormatData command had already run when I launched the install script. In the install script I force-import the module, which runs that first statement - and therefore the Update-FormatData call - a second time.
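    One defensive pattern worth trying (a sketch, not a verified solution: PowerShell exposes no documented way to enumerate already-loaded format files, so this simply treats the duplicate-load error as benign and falls back to a plain refresh):

        # At the top of the module definition
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        }
        catch {
            # Already appended in this session (e.g. Import-Module -Force);
            # refresh the format files PowerShell already knows about instead.
            Update-FormatData
        }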

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

        1. BUILTIN\Administrators
        2. MYDOMAIN\Schema Admins
        3. MYDOMAIN\Offer Remote Assistance Helpers
        4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies to a folder. I then serve these movies using DLNA. But in the initial folder where they end up, all kinds of files arrive: pictures, music, documents, etc. I thought I'd fix this by running the following script inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and moves all the found movies of the given extension types to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found that includes a movie of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. I have a solution figured out via a script made in PHP (yes, I actually do most of my scripting on Linux using PHP... just a habit that stuck a long time ago). But if rsync can handle it from the start, that would be super. Also, I have noticed that this rsync invocation actually copies all the root folders in the given folder; if no movie is in a folder, it will create an empty folder. How do I prevent rsync from doing this, saving me the trouble of deleting all the empty folders in Movies?
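    On the empty-folder point, rsync has a flag for exactly that, and subtitles can be whitelisted alongside the movie extensions. A sketch, assuming a reasonably recent rsync (-m is --prune-empty-dirs; the .srt include is an addition to the original command). Note this still filters by extension - rsync's include/exclude engine cannot directly express "copy the whole folder only if it contains a movie":

        rsync -rvtm \
            --include='*/' \
            --include='*.avi' --include='*.mkv' --include='*.srt' \
            --exclude='*' \
            . ../Movies/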

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • Remotely port forward/launch process or a client-less remote desktop app?

    - by DC177E
    I have an XP box running LogMeIn at a remote location behind a Linksys router, which was running well for all of four days, until we had a power failure. Our ISP gave us a new IP, the machine restarted, and LogMeIn did not autorun (or, at least, it did not automatically sign in), and our service (which may or may not be a Minecraft server with non-backed-up save files) also did not run upon startup. LogMeIn does not register the new IP (it still displays the old one). I have a DDNS updater service, so I do know the new dynamic address. I have tried using the built-in XP remote desktop service, but, as with almost all non-cloud-based remote desktop services, it requires a port forward. Thus, I would appreciate it if anyone has any ideas as to: A: Any way of accessing our router remotely to forward the remote desktop port. I've seen the Remote Management option (forwarding the setup page to port 8080), but I do not have it enabled. I've tried UPnP, but again, the setup page for our router is not forwarded. B: Any way of remotely launching a process that does not require port forwarding (or uses ports 255XX, 18XXX, or 9000.), such as a remote console service built into XP. I realize this is a near impossibility. C: A way to remotely start LogMeIn and sign in, which is likely a definite impossibility. Sorry if this is too specific for Stack Exchange, or if I've put it into the wrong section (is Super User the correct place for this?). Ideas would, again, be much appreciated, shot-in-the-dark as this may be.

    Read the article

  • What should I use to ping multiple IPs and get notified of time outs?

    - by HumanVirus
    I've been using MultiPing to ping hundreds of IPs (from access points and such) and check their performance (packet loss, latency) and uptime. The program is very easy to use, but I was wondering if someone could recommend something that would work better and that would also work on Linux. The features I'm looking for are:

    Notification types: at least desktop notifications and SMS, but it would be great if it also had e-mail, IM, or other types of notifications. (MultiPing has some of these, but they don't work too well.)

    Being notified about the root problem only: since some devices are dependent on others, I'd like to be notified only about the root problem. E.g. let's say I have A[x.x.x.222] -> B[x.x.x.33] -> C[x.x.x.44] -> D[x.x.x.55], and B goes down; therefore C and D will also be down. Is it possible to get a notification only about B being down?

    Light on resources.

    Ideally multiplatform, or at least available for both Linux and Windows.

    I've heard about Nagios and Shinken being used for monitoring. Would you recommend that I use something of the sort, or would that be too much for my needs? If using Nagios, Shinken, or similar software is recommended, can anyone tell me what sites I should go to or what books I should get that would be good for someone who is totally new at this? I'd appreciate any suggestions.
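    On the root-problem-only requirement: Nagios-family tools model the dependency chain with the parents directive - when a parent host is down, its children are flagged UNREACHABLE rather than DOWN, and notifications for UNREACHABLE states can be switched off. A minimal sketch using the chain from the example above (host names assumed):

        define host {
            use        generic-host
            host_name  host-b
            address    x.x.x.33
            parents    host-a      ; no alerts for host-b while host-a is down
        }

        define host {
            use        generic-host
            host_name  host-c
            address    x.x.x.44
            parents    host-b      ; likewise suppressed when host-b is the root cause
        }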

    Read the article

  • Reporting SQL Vulnerability [migrated]

    - by Ciaran87Bel
    My first post here, so I'll hopefully keep it simple. I have just finished building a CMS targeted at a certain industry and built a test site to see how everything works. Anyway, I wrote a program to check for SQL injection vulnerabilities, and the program followed a blog link to an external website. The program discovered that the external site had a massive vulnerability that left it open to practically anyone, who could then access every bit of data on their MySQL server and run queries etc. The thing is, this external site is the brand leader in their industry and does millions upon millions of sales per annum. I have tried contacting them to let them know, and even went as far as contacting the company that built their platform, but I was pretty much brushed off and haven't heard back from them. Their database would contain the details of hundreds of thousands of customers and all their data. I could easily make myself site admin etc. in a few seconds, but they won't listen to me even though I have offered to share the vulnerability with them and help in any way I can. Is there anything else I can do? It is one of the biggest security risks I have ever personally come across. Are there any other steps I should take to report this? Thanks

    Read the article

  • How to use Ninject with XNA?

    - by Rosarch
    I'm having difficulty integrating Ninject with XNA.

        static class Program
        {
            /**
             * The main entry point for the application.
             */
            static void Main(string[] args)
            {
                IKernel kernel = new StandardKernel(NinjectModuleManager.GetModules());
                CachedContentLoader content = kernel.Get<CachedContentLoader>(); // stack overflow here
                MasterEngine game = kernel.Get<MasterEngine>();
                game.Run();
            }
        }

        // constructor for the game
        public MasterEngine(IKernel kernel)
            : base(kernel)
        {
            this.inputReader = kernel.Get<IInputReader>();
            graphicsDeviceManager = kernel.Get<GraphicsDeviceManager>();
            Components.Add(kernel.Get<GamerServicesComponent>());

            // Tell the loader to look for all files relative to the "Content" directory.
            Assets = kernel.Get<CachedContentLoader>();

            // Sets dimensions of the game window
            graphicsDeviceManager.PreferredBackBufferWidth = 800;
            graphicsDeviceManager.PreferredBackBufferHeight = 600;
            graphicsDeviceManager.ApplyChanges();

            IsMouseVisible = false;
        }

    Ninject.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Ninject.Modules;
        using HWAlphaRelease.Controller;
        using Microsoft.Xna.Framework;
        using Nuclex.DependencyInjection.Demo.Scaffolding;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        namespace HWAlphaRelease
        {
            public static class NinjectModuleManager
            {
                public static NinjectModule[] GetModules()
                {
                    return new NinjectModule[1] { new GameModule() };
                }

                /// <summary>Dependency injection rules for the main game instance</summary>
                public class GameModule : NinjectModule
                {
                    #region class ServiceProviderAdapter

                    /// <summary>Delegates to the game's built-in service provider</summary>
                    /// <remarks>
                    /// When a class' constructor requires an IServiceProvider, the dependency
                    /// injector cannot just construct a new one and wouldn't know that it has
                    /// to create an instance of the Game class (or take it from the existing
                    /// Game instance). The solution, then, is this small adapter that takes
                    /// a Game instance and acts as if it was a freely constructable
                    /// IServiceProvider implementation while in reality, it delegates all
                    /// lookups to the Game's service container.
                    /// </remarks>
                    private class ServiceProviderAdapter : IServiceProvider
                    {
                        /// <summary>Initializes a new service provider adapter for the game</summary>
                        /// <param name="game">Game the service provider will be taken from</param>
                        public ServiceProviderAdapter(Game game)
                        {
                            this.gameServices = game.Services;
                        }

                        /// <summary>Retrieves a service from the game service container</summary>
                        /// <param name="serviceType">Type of the service that will be retrieved</param>
                        /// <returns>The service that has been requested</returns>
                        public object GetService(Type serviceType)
                        {
                            return this.gameServices;
                        }

                        /// <summary>Game services container of the Game instance</summary>
                        private GameServiceContainer gameServices;
                    }

                    #endregion // class ServiceProviderAdapter

                    #region class ContentManagerAdapter

                    /// <summary>Delegates to the game's built-in ContentManager</summary>
                    /// <remarks>
                    /// This provides shared access to the game's ContentManager. A dependency
                    /// injected class only needs to require the ISharedContentService in its
                    /// constructor and the dependency injector will automatically resolve it
                    /// to this adapter, which delegates to the Game's built-in content manager.
                    /// </remarks>
                    private class ContentManagerAdapter : ISharedContentService
                    {
                        /// <summary>Initializes a new shared content manager adapter</summary>
                        /// <param name="game">Game the content manager will be taken from</param>
                        public ContentManagerAdapter(Game game)
                        {
                            this.contentManager = game.Content;
                        }

                        /// <summary>Loads or accesses shared game content</summary>
                        /// <typeparam name="AssetType">Type of the asset to be loaded or accessed</typeparam>
                        /// <param name="assetName">Path and name of the requested asset</param>
                        /// <returns>The requested asset from the shared game content store</returns>
                        public AssetType Load<AssetType>(string assetName)
                        {
                            return this.contentManager.Load<AssetType>(assetName);
                        }

                        /// <summary>The content manager this instance delegates to</summary>
                        private ContentManager contentManager;
                    }

                    #endregion // class ContentManagerAdapter

                    /// <summary>Initializes the dependency configuration</summary>
                    public override void Load()
                    {
                        // Allows access to the game class for any components with a dependency
                        // on the 'Game' or 'DependencyInjectionGame' classes.
                        Bind<MasterEngine>().ToSelf().InSingletonScope();
                        Bind<NinjectGame>().To<MasterEngine>().InSingletonScope();
                        Bind<Game>().To<MasterEngine>().InSingletonScope();

                        // Let the dependency injector construct a graphics device manager for
                        // all components depending on the IGraphicsDeviceService and
                        // IGraphicsDeviceManager interfaces
                        Bind<GraphicsDeviceManager>().ToSelf().InSingletonScope();
                        Bind<IGraphicsDeviceService>().To<GraphicsDeviceManager>().InSingletonScope();
                        Bind<IGraphicsDeviceManager>().To<GraphicsDeviceManager>().InSingletonScope();

                        // Some clever adapters that hand out the Game's IServiceProvider and allow
                        // access to its built-in ContentManager
                        Bind<IServiceProvider>().To<ServiceProviderAdapter>().InSingletonScope();
                        Bind<ISharedContentService>().To<ContentManagerAdapter>().InSingletonScope();

                        Bind<IInputReader>().To<UserInputReader>().InSingletonScope()
                            .WithConstructorArgument("keyMapping", Constants.DEFAULT_KEY_MAPPING);
                        Bind<CachedContentLoader>().ToSelf().InSingletonScope()
                            .WithConstructorArgument("rootDir", "Content");
                    }
                }
            }
        }

    NinjectGame.cs:

        /// <summary>Base class for Games making use of Ninject</summary>
        public class NinjectGame : Game
        {
            /// <summary>Initializes a new Ninject game instance</summary>
            /// <param name="kernel">Kernel the game has been created by</param>
            public NinjectGame(IKernel kernel)
            {
                Type ownType = this.GetType();
                if (ownType != typeof(Game))
                {
                    kernel.Bind<NinjectGame>().To<MasterEngine>().InSingletonScope();
                }
                kernel.Bind<Game>().To<NinjectGame>().InSingletonScope();
            }
        }
        // namespace Nuclex.DependencyInjection.Demo.Scaffolding

    When I try to get the CachedContentLoader, I get a stack overflow exception. I'm basing this off of this tutorial, but I really have no idea what I'm doing. Help?
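    One detail that stands out once the code is laid out (an observation, not a confirmed diagnosis): ServiceProviderAdapter.GetService hands back the whole service container instead of looking up the requested service, and the Game/NinjectGame/MasterEngine bindings form a loop between GameModule.Load and the NinjectGame constructor, which is exactly the kind of circular resolution that ends in a stack overflow. The delegating version of GetService would presumably look like:

        // A sketch, assuming delegation was intended; as written, every caller
        // asking for any service gets the GameServiceContainer itself back:
        public object GetService(Type serviceType)
        {
            return this.gameServices.GetService(serviceType);
        }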

    Read the article

  • Need help debugging a very basic PHP SOAP Hello world app

    - by WarDoGG
    I have been breaking my head at this, reading almost every article and tutorial there is on the web, but nothing doing... I still cannot get my first web service application to work. I would really appreciate it if anyone could debug this code for me and provide a good explanation as to what is wrong and why - that would help indeed! Thanks! I have pasted below the entire code that I am using, to make it easier to debug. I'm using the PHP5 SOAP extension. Here is my WSDL:

        <?xml version="1.0" encoding="utf-8"?>
        <wsdl:definitions name="testWebservice"
            xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
            xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
            xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
            xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
            xmlns:tns="http://tempuri.org/"
            xmlns:s1="http://microsoft.com/wsdl/types/"
            xmlns:s="http://www.w3.org/2001/XMLSchema"
            xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
            xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
            targetNamespace="http://tempuri.org/"
            xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
          <wsdl:types>
            <s:schema elementFormDefault="qualified" targetNamespace="http://tempuri.org/">
              <s:import namespace="http://microsoft.com/wsdl/types/" />
              <s:element name="getUser">
                <s:complexType>
                  <s:sequence>
                    <s:element minOccurs="0" maxOccurs="1" name="username" type="s:string" />
                    <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string" />
                  </s:sequence>
                </s:complexType>
              </s:element>
              <s:element name="getUserResponse">
                <s:complexType>
                  <s:sequence>
                    <s:element minOccurs="0" maxOccurs="1" name="getUserResult" type="tns:userInfo" />
                  </s:sequence>
                </s:complexType>
              </s:element>
              <s:complexType name="userInfo">
                <s:sequence>
                  <s:element minOccurs="1" maxOccurs="1" name="ID" type="s:int" />
                  <s:element minOccurs="1" maxOccurs="1" name="authkey" type="s:int" />
                </s:sequence>
              </s:complexType>
            </s:schema>
          </wsdl:types>
          <wsdl:message name="getUserSoapIn">
            <wsdl:part name="parameters" element="tns:getUser" />
          </wsdl:message>
          <wsdl:message name="getUserSoapOut">
            <wsdl:part name="parameters" element="tns:getUserResponse" />
          </wsdl:message>
          <wsdl:portType name="testWebservice">
            <wsdl:operation name="getUser">
              <wsdl:input message="tns:getUserSoapIn" />
              <wsdl:output message="tns:getUserSoapOut" />
            </wsdl:operation>
          </wsdl:portType>
          <wsdl:binding name="testWebserviceBinding" type="tns:testWebservice">
            <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
            <wsdl:operation name="getUser">
              <soap:operation soapAction="http://tempuri.org/getUser" />
              <wsdl:input>
                <soap:body use="literal" />
              </wsdl:input>
              <wsdl:output>
                <soap:body use="literal" />
              </wsdl:output>
            </wsdl:operation>
          </wsdl:binding>
          <wsdl:service name="testWebserviceService">
            <wsdl:port name="testWebservicePort" binding="tns:testWebserviceBinding">
              <soap:address location="http://127.0.0.1/nusoap/storytruck/index.php" />
            </wsdl:port>
          </wsdl:service>
        </wsdl:definitions>

    And here is the PHP code I use to set up the server:

        <?php
        function getUser($user, $pass)
        {
            return array('ID' => 1);
        }

        ini_set("soap.wsdl_cache_enabled", "0"); // disabling WSDL cache
        $server = new SoapServer("http://127.0.0.1/mywsdl.wsdl");
        $server->addFunction('getUser');
        $server->handle();
        ?>

    And the code for the client:

        <?php
        $client = new SoapClient("http://127.0.0.1/index.php?wsdl", array('exceptions' => 0));
        try {
            $result = $client->getUser("username", "pass");
            print_r($result);
        } catch (SoapFault $result) {
            print_r($result);
        }
        ?>

    Here is the ERROR output I am getting in the browser:

        SoapFault Object
        (
            [message:protected] => Error cannot find parameter
            [string:Exception:private] =>
            [code:protected] => 0
            [file:protected] => C:\xampp\htdocs\client.php
            [line:protected] => 6
            [trace:Exception:private] => Array
                (
                    [0] => Array
                        (
                            [function] => __call
                            [class] => SoapClient
                            [type] => ->
                            [args] => Array
                                (
                                    [0] => getUser
                                    [1] => Array
                                        (
                                            [0] => username
                                            [1] => pass
                                        )
                                )
                        )
                    [1] => Array
                        (
                            [file] => C:\xampp\htdocs\client.php
                            [line] => 6
                            [function] => getUser
                            [class] => SoapClient
                            [type] => ->
                            [args] => Array
                                (
                                    [0] => username
                                    [1] => pass
                                )
                        )
                )
            [previous:Exception:private] =>
            [faultstring] => Error cannot find parameter
            [faultcode] => SOAP-ENV:Client
        )
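    A hunch worth testing (an assumption, not a verified fix): the binding is document/literal, and in WSDL mode PHP's SoapClient usually expects one structured argument matching the <getUser> element rather than two positional scalars, which would explain "Error cannot find parameter". The server-side handler likewise receives a single object:

        <?php
        // Client: pass the wrapped document/literal parameters as one array
        $client = new SoapClient("http://127.0.0.1/index.php?wsdl");
        $result = $client->getUser(array('username' => 'username', 'password' => 'pass'));
        print_r($result);

        // Server-side handler shaped to match (sketch):
        function getUser($params)
        {
            // $params->username and $params->password hold the request values;
            // the WSDL's userInfo type expects both ID and authkey in the reply.
            return array('ID' => 1, 'authkey' => 42);
        }
        ?>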

    Read the article

  • Android using ksoap calling PHP SOAP webservice fails: procedure 'CheckLogin' not present

    - by AmazingDreams
    I'm trying to call a PHP SOAP webservice. I know my webservice functions correctly, because I use it successfully in a WPF project. I'm also building an app on Android, using the same webservice. The WSDL file can be found here: http://www.wegotcha.nl/servicehandler/service.wsdl This is my code in the Android app:

        String SOAP_ACTION = "http://www.wegotcha.nl/servicehandler/CheckLogin";
        String NAMESPACE = "http://www.wegotcha.nl/servicehandler";
        String METHOD_NAME = "CheckLogin";
        String URL = "http://www.wegotcha.nl/servicehandler/servicehandler.php";
        String resultData = "";

        SoapSerializationEnvelope soapEnv = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);

        SoapObject UserCredentials = new SoapObject("Types", "UserCredentials6");
        UserCredentials.addProperty("mail", params[0]);
        UserCredentials.addProperty("password", md5(params[1]));
        request.addSoapObject(UserCredentials);

        soapEnv.setOutputSoapObject(request);

        HttpTransportSE http = new HttpTransportSE(URL);
        http.setXmlVersionTag("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        http.debug = true;

        try {
            http.call(SOAP_ACTION, soapEnv);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (XmlPullParserException e) {
            e.printStackTrace();
        }

        SoapObject results = null;
        results = (SoapObject) soapEnv.bodyOut;
        if (results != null)
            resultData = results.getProperty(0).toString();
        return resultData;

    Using Fiddler, this is the Android request:

        <?xml version="1.0" encoding="utf-8"?>
        <v:Envelope xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns:d="http://www.w3.org/2001/XMLSchema"
                    xmlns:c="http://schemas.xmlsoap.org/soap/encoding/"
                    xmlns:v="http://schemas.xmlsoap.org/soap/envelope/">
          <v:Header />
          <v:Body>
            <n0:CheckLogin id="o0" c:root="1" xmlns:n0="http://www.wegotcha.nl/servicehandler">
              <n1:UserCredentials6 i:type="n1:UserCredentials6" xmlns:n1="Types">
                <mail i:type="d:string">myemail</mail>
                <password i:type="d:string">myhashedpass</password>
              </n1:UserCredentials6>
            </n0:CheckLogin>
          </v:Body>
        </v:Envelope>

    which gets the following response:

        Procedure 'CheckLogin' not present

    The request produced by my WPF app looks completely different:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
          <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <UserCredentials6 xmlns="Types">
              <mail>mymail</mail>
              <password>mypassword</password>
            </UserCredentials6>
          </s:Body>
        </s:Envelope>

    After googling my ass off, I was not able to solve this problem by myself. It may be that there are some weird things in my Java code, because I have changed a lot since. I hope you guys will be able to help me, thanks. EDIT: My webservice uses document/literal encoding style. After doing some research I found I should be able to use a plain SoapEnvelope instead of SoapSerializationEnvelope. However, when I swap it in, I get an error before the try/catch block, causing my app to crash:

        11-04 16:23:26.786: E/AndroidRuntime(26447): Caused by: java.lang.ClassCastException:
        org.ksoap2.serialization.SoapObject cannot be cast to org.kxml2.kdom.Node

    which is caused by these lines:

        request.addSoapObject(UserCredentials);
        soapEnv.setOutputSoapObject(request);

    This may be a solution though - how do I go about it? I found nothing about using a SoapEnvelope instead of a SoapSerializationEnvelope except for this tutorial: http://ksoap.objectweb.org/project/mailingLists/ksoap/msg00849.html
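    Comparing the two captures: the Android envelope wraps UserCredentials6 inside a CheckLogin element and decorates it with i:type attributes, while the working WPF request puts UserCredentials6 directly in the body with no type decorations. A sketch of how ksoap2 might be coaxed into the same shape (an untested assumption - the two levers are implicitTypes and making the credentials object the body element itself):

        // Make UserCredentials6 itself the body element, as in the WPF request
        SoapObject credentials = new SoapObject("Types", "UserCredentials6");
        credentials.addProperty("mail", params[0]);
        credentials.addProperty("password", md5(params[1]));

        SoapSerializationEnvelope envelope =
                new SoapSerializationEnvelope(SoapEnvelope.VER11);
        envelope.implicitTypes = true;          // drop the i:type="..." attributes
        envelope.setOutputSoapObject(credentials);

        HttpTransportSE http = new HttpTransportSE(URL);
        http.call(SOAP_ACTION, envelope);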

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and my co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.

    1. Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated.

    2. Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by the business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if the data storage method for a given object changes.

    3. Business objects are base classes, and data source objects inherit from business objects. Client code deals primarily with base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.

    I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice on which method is better to use (or feel free to suggest something else), and why.
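    To make option 2 concrete, a minimal sketch in C# (all names here are illustrative, not from the source): the business object resolves a storage strategy from configuration at runtime, so client code never mentions a concrete store, and an LDAP-backed store can refuse writes without changing the client-facing type:

        public interface IUserStore
        {
            User Load(string id);
            void Save(User user);   // an LDAP store might throw NotSupportedException
        }

        public class User
        {
            private readonly IUserStore store;

            // StoreRegistry is a hypothetical config-driven factory that maps
            // "User" to DatabaseUserStore or LdapUserStore at runtime.
            public User() : this(StoreRegistry.Resolve<IUserStore>("User")) { }
            internal User(IUserStore store) { this.store = store; }

            public string Id { get; set; }

            public void Save() { store.Save(this); }   // business rules first, then delegate

            public static User Load(string id)
            {
                return StoreRegistry.Resolve<IUserStore>("User").Load(id);
            }
        }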

    Read the article

  • Problem with initializing a type with WindsorContainer

    - by the_drow
        public ApplicationView(string[] args)
        {
            InitializeComponent();

            string configFilePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log4net.config");
            FileInfo configFileInfo = new FileInfo(configFilePath);
            XmlConfigurator.ConfigureAndWatch(configFileInfo);

            IConfigurationSource configSource =
                ConfigurationManager.GetSection("ActiveRecord") as IConfigurationSource;
            Assembly assembly = Assembly.Load("Danel.Nursing.Model");
            ActiveRecordStarter.Initialize(assembly, configSource);

            WindsorContainer windsorContainer = ApplicationUtils.GetWindsorContainer();
            windsorContainer.Kernel.AddComponentInstance<ApplicationView>(this);
            windsorContainer.Kernel.AddComponent(typeof(ApplicationController).Name, typeof(ApplicationController));

            controller = windsorContainer.Resolve<ApplicationController>(); // exception is thrown here

            OnApplicationLoad(args);
        }

    The stack trace is this:

        Castle.MicroKernel.ComponentActivator.ComponentActivatorException was unhandled
          Message="ComponentActivator: could not instantiate Danel.Nursing.Scheduling.Actions.DataServices.NurseAbsenceDataService"
          Source="Castle.MicroKernel"
          StackTrace:
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
            at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
            at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
            at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
            at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
            at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
            at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
            at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service, IDictionary additionalArguments)
            at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service)
            at Castle.MicroKernel.DefaultKernel.get_Item(Type service)
            at Castle.Windsor.WindsorContainer.Resolve(Type service)
            at Castle.Windsor.WindsorContainer.Resolve<T>()
            at Danel.Nursing.Scheduling.ApplicationView..ctor(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\ApplicationView.cs:line 65
            at Danel.Nursing.Scheduling.Program.Main(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\Program.cs:line 24
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException: System.ArgumentNullException
            Message="Value cannot be null.\r\nParameter name: types"
            Source="mscorlib"
            ParamName="types"
            StackTrace:
              at System.Type.GetConstructor(BindingFlags bindingAttr, Binder binder, Type[] types, ParameterModifier[] modifiers)
              at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.FastCreateInstance(Type implType, Object[] arguments, Type[] signature)
              at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
            InnerException:

    It actually says that the type that I'm trying to initialize does not exist, I think. This is the concrete type that it complains about:

        namespace Danel.Nursing.Scheduling.Actions.DataServices
        {
            using System;
            using Helpers;
            using Rhino.Commons;
            using Danel.Nursing.Model;
            using NHibernate.Expressions;
            using System.Collections.Generic;
            using DateUtil = Danel.Nursing.Scheduling.Actions.Helpers.DateUtil;
            using Danel.Nursing.Scheduling.Actions.DataServices.Interfaces;

            public class NurseAbsenceDataService : AbstractDataService<NurseAbsence>, INurseAbsenceDataService
            {
                NurseAbsenceDataService(IRepository<NurseAbsence> repository)
                    : base(repository)
                { }

                //...
            }
        }

    The AbstractDataService only holds the IRepository for now. Anyone got an idea why the exception is thrown?
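    One detail that stands out once the code is laid out (an observation from the pasted snippet, not a confirmed diagnosis): the NurseAbsenceDataService constructor has no access modifier, which in C# makes it private. Windsor's activator would then find no public constructor candidate and end up calling Type.GetConstructor with a null types array, consistent with the ArgumentNullException("types") inner exception. Making the constructor public may be all that's needed:

        public class NurseAbsenceDataService : AbstractDataService<NurseAbsence>, INurseAbsenceDataService
        {
            // 'public' added; without it the constructor is private and
            // invisible to Castle Windsor's constructor selection.
            public NurseAbsenceDataService(IRepository<NurseAbsence> repository)
                : base(repository)
            {
            }
        }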

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to locations in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach to accomplishing this goal. I have come up with several designs.

    Manual location tracking. Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read.
    Advantages: Very simple. No class bloat. No need to rewrite the TextReader methods.
    Disadvantages: Couples location-tracking logic to the tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code.

    Plain TextReader wrapper. Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example:

        public class LocatedTextReaderWrapper : TextReader
        {
            private TextReader source;

            public Location Location { get; set; }

            public LocatedTextReaderWrapper(TextReader source)
                : this(source, new Location())
            {
            }

            public LocatedTextReaderWrapper(TextReader source, Location location)
            {
                this.Location = location;
                this.source = source;
            }

            public override int Read(char[] buffer, int index, int count)
            {
                int ret = this.source.Read(buffer, index, count);
                if (ret >= 0)
                {
                    this.location.AdvanceString(string.Concat(buffer.Skip(index).Take(count)));
                }
                return ret;
            }

            // etc.
        }

    Advantages: Tokenization doesn't know about Location tracking.
    Disadvantages: The user needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking, or different location trackers, to be added without layers of wrappers.

    Event-based TextReader wrapper. Like LocatedTextReaderWrapper, but decouples it from the Location object, raising an event whenever data is read.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once.
    Disadvantages: Requires boilerplate code to enable location tracking. The user needs to create and dispose the wrapper instance, in addition to their TextReader instance.

    Aspect-oriented approach. Use AOP to perform like the event-based wrapper approach.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods.
    Disadvantages: Requires external dependencies, which I want to avoid.

    I am looking for the best approach in my situation. I would like to: not bloat the tokenizer methods with location tracking; not require heavy initialization in user code; not have any/much boilerplate/duplicated code; and (perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments is welcome. Thanks! (For those who want a specific question: what is the best way to wrap the functionality of a TextReader?) I have implemented the "Plain TextReader wrapper" and "Event-based TextReader wrapper" approaches and am displeased with both, for the reasons mentioned in their disadvantages.
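    For the event-based option, a minimal sketch of what the wrapper might look like (member names are assumed, not taken from the post). Each subscriber keeps its own Location and advances it from the event, so the reader stays decoupled from any particular tracker:

        using System;
        using System.IO;

        public class EventingTextReaderWrapper : TextReader
        {
            private readonly TextReader source;

            // Raised with the characters just consumed; trackers subscribe here.
            public event Action<string> DataRead;

            public EventingTextReaderWrapper(TextReader source)
            {
                this.source = source;
            }

            public override int Read()
            {
                int c = source.Read();
                if (c >= 0)
                {
                    var handler = DataRead;
                    if (handler != null) handler(((char)c).ToString());
                }
                return c;
            }

            // Read(char[], int, int), Peek(), etc. would forward and raise similarly.
        }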

    Read the article

  • Delphi - Why is my global variable "inaccessible" when I debug?

    - by Antoine Lpy
    I'm building an application that contains around 30 forms. I need to manage sessions, so I would like to have a global LoggedInUser variable accessible from all forms. I read David Heffernan's post about global variables and how to avoid them, but I thought it would be easier to have one global User variable rather than 30 forms each having their own. So I have a unit, GlobalVars:

        unit GlobalVars;

        interface

        uses User; // I defined my TUser class in a unit called User

        var
          LoggedInUser: TUser;

        implementation

        initialization
          LoggedInUser := TUser.Create;

        finalization
          LoggedInUser.Free;

        end.

    Then in my LoginForm's LoginBtnClick procedure I do:

        unit FormLogin;

        interface

        uses [...], User;

        type
          TForm1 = class(TForm)
            [...]
            procedure LoginBtnClick(Sender: TObject);
          private
            { Déclarations privées }
          public
          end;

        var
          Form1: TForm1;
          AureliusConnection: IDBConnection;

        implementation

        {$R *.fmx}

        uses [...] GlobalVars;

        procedure TForm1.LoginBtnClick(Sender: TObject);
        var
          Manager: TObjectManager;
          MyCriteria: TCriteria<TUser>;
          u: TUser;
        begin
          Manager := TObjectManageR.Create(AureliusConnection);
          MyCriteria := Manager.Find<TUtilisateur>
            .Add(TExpression.Eq('login', LoginEdit.Text))
            .Add(TExpression.Eq('password', PasswordEdit.Text));
          u := MyCriteria.UniqueResult;
          if u = nil then
            MessageDlg('Login ou mot de passe incorrect', TMsgDlgType.mtError, [TMsgDlgBtn.mbOK], 0)
          else
          begin
            LoggedInUser := u; // Here I assign my local User data to my global User variable
            Form1.Destroy;
            A00Form.Visible := True;
          end;
          Manager.Free;
        end;

    Then in another form I would like to access this LoggedInUser object in the Menu1BtnClick procedure:

        unit C01_Deviations;

        interface

        uses
          System.SysUtils, System.Types, System.UITypes, System.Classes, System.Variants,
          FMX.Types, FMX.Graphics, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.StdCtrls,
          FMX.ListView.Types, FMX.ListView, FMX.Objects, FMX.Layouts, FMX.Edit, FMX.Ani;

        type
          TC01Form = class(TForm)
            [...]
            Menu1Btn: TButton;
            [...]
            procedure Menu1BtnClick(Sender: TObject);
          private
            { Déclarations privées }
          public
            { Déclarations publiques }
          end;

        var
          C01Form: TC01Form;

        implementation

        uses [...] User, GlobalVars;

        {$R *.fmx}

        procedure TC01Form.Menu1BtnClick(Sender: TObject);
        var
          Assoc: TUtilisateur_FonctionManagement;
          ValidationOK: Boolean;
          util: TUser;
        begin
          ValidationOK := False;
          util := GlobalVars.LoggedInUser; // Local variable created for debugging, so I could
                                           // see the user data; it shows "Inaccessible Value"
          util.Nom := 'test';
          for Assoc in util.FonctionManagement do // Here is where my initial "access violation" error occurs
          begin
            if Assoc.FonctionManagement.Libelle = 'Reponsable équipe HACCP' then
            begin
              ValidationOK := True;
              break;
            end;
          end;
          [...]
        end;

    When I debug, I see "Inaccessible Value" in the value column for my user. Do you have any idea why? I tried putting an integer in this GlobalVars unit, and I was able to set its value from my login form and read it from my other form. I guess I could store the user's ID, which is an integer, and then retrieve the user from the database using its ID, but that seems really inefficient.
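    A sketch of the function-wrapped global that the cited advice usually leads to (names assumed). It does not by itself explain the access violation - which smells like a lazily loaded Aurelius association being touched after Manager.Free - but it narrows the global to a single access point, which also tends to behave better in the debugger's watch window than a raw unit variable:

        unit GlobalVars;

        interface

        uses
          User;

        function LoggedInUser: TUser;
        procedure SetLoggedInUser(AUser: TUser);

        implementation

        var
          FLoggedInUser: TUser;

        function LoggedInUser: TUser;
        begin
          Result := FLoggedInUser;
        end;

        procedure SetLoggedInUser(AUser: TUser);
        begin
          FLoggedInUser := AUser;
        end;

        end.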

    Read the article

  • Fluent NHibernate and Polymorphism and a Newbie!

    - by Andy Baker
    I'm a Fluent NHibernate newbie and I'm struggling with mapping a hierarchy of polymorphic objects. I've produced the following model, which recreates the essence of what I'm doing in my real application. I have a product list and several specialised types of products:

        public class MyProductList
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Product> Products { get; set; }

            public MyProductList()
            {
                Products = new List<Product>();
            }
        }

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual string ProductDescription { get; set; }
        }

        public class SizedProduct : Product
        {
            public virtual decimal Size { get; set; }
        }

        public class BundleProduct : Product
        {
            public virtual Product BundleItem1 { get; set; }
            public virtual Product BundleItem2 { get; set; }
        }

    Note that I have a specialised type of Product called BundleProduct that has two products attached. I can add any of the specialised types of product to my list, and a bundle product can be made up of any of the specialised types of product too. Here is the Fluent NHibernate mapping that I'm using:

        public class MyListMap : ClassMap<MyList>
        {
            public MyListMap()
            {
                Id(ml => ml.Id);
                Map(ml => ml.Name);
                HasManyToMany(ml => ml.Products).Cascade.All();
            }
        }

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(prod => prod.Id);
                Map(prod => prod.ProductDescription);
            }
        }

        public class SizedProductMap : SubclassMap<SizedProduct>
        {
            public SizedProductMap()
            {
                Map(sp => sp.Size);
            }
        }

        public class BundleProductMap : SubclassMap<BundleProduct>
        {
            public BundleProductMap()
            {
                References(bp => bp.BundleItem1).Cascade.All();
                References(bp => bp.BundleItem2).Cascade.All();
            }
        }

    I haven't configured any reverse mappings, so a product doesn't know which lists it belongs to or which bundles it is part of. Next I add some products to my list:

        MyList ml = new MyList() { Name = "Example" };
        ml.Products.Add(new Product() { ProductDescription = "PSU" });
        ml.Products.Add(new SizedProduct()
        {
            ProductDescription = "Extension Cable",
            Size = 2.0M
        });
        ml.Products.Add(new BundleProduct()
        {
            ProductDescription = "Fan & Cable",
            BundleItem1 = new Product() { ProductDescription = "Fan Power Cable" },
            BundleItem2 = new SizedProduct() { ProductDescription = "80mm Fan", Size = 80M }
        });

    When I persist my list to the database and reload it, the list itself contains the items I expect: MyList[0] has a type of Product, MyList[1] has a type of SizedProduct, and MyList[2] has a type of BundleProduct - great! But if I navigate to the BundleProduct, I'm not able to see the types of the products attached to BundleItem1 or BundleItem2; instead they are always proxies of Product - in this example, BundleItem2 should be a SizedProduct. Is there anything I can do to resolve this, either in my model or the mapping? Thanks in advance for your help.
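    One commonly suggested workaround, sketched here (not a confirmed fix for this exact mapping): the BundleItem references come back as lazy proxies of the base class, and a base-class proxy can never be cast to its subclass. Disabling lazy loading on those two references makes NHibernate materialize the concrete type instead:

        public class BundleProductMap : SubclassMap<BundleProduct>
        {
            public BundleProductMap()
            {
                // Not.LazyLoad() forces the concrete SizedProduct / BundleProduct
                // to be fetched instead of a Product proxy.
                References(bp => bp.BundleItem1).Cascade.All().Not.LazyLoad();
                References(bp => bp.BundleItem2).Cascade.All().Not.LazyLoad();
            }
        }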

    Read the article
