Search Results

Search found 17856 results on 715 pages for 'setup py'.

Page 207/715

  • Why isn't WH_MOUSE hook global anymore?

    - by Valentin Galea
    I have a global mouse hook set up in a DLL that watches for mouse gestures. Everything works perfectly, but only with the hook set to WH_MOUSE_LL, which is a low-level hook and one that doesn't need to live in an external injectable DLL. Once I switch to the (one would say more suitable) WH_MOUSE hook, everything falls apart. As soon as I click outside my main application (the one that installs the hook), the hook gets corrupted and ::UnhookWindowsHookEx fails. I only found one comment on Experts Exchange saying: "No way, at least under Windows XP + SP2 WH_MOUSE won't go global, you must use WH_MOUSE_LL instead." I set the hook up correctly: in a DLL, using a shared data section, and posting rather than sending messages from the hook procedure. Why has this changed, and why isn't it documented? Has anyone encountered this? Thanks! BTW: I've reverse engineered the popular StrokeIt application a bit, and it uses a combination of WH_GETMESSAGE and WH_MOUSE hooks and still works on XP/Vista...
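    For reference, the setup described here, a system-wide WH_MOUSE hook exported from a DLL, with the hook handle kept in a shared data section and notifications posted (never sent) back to the installing application, looks roughly like the sketch below. The export names and the WM_APP message are invented for illustration; only the overall structure reflects the question.

        // MouseHookDll.cpp - minimal sketch of a global WH_MOUSE hook DLL.
        // InstallHook/RemoveHook and WM_APP_GESTURE are invented names; build as a DLL
        // and link with /SECTION:.shared,RWS so the data_seg below is really shared.
        #include <windows.h>

        #pragma data_seg(".shared")
        HHOOK g_hHook = NULL;          // shared across every process the DLL is mapped into
        HWND  g_hNotifyWnd = NULL;     // window in the installing app that receives notifications
        #pragma data_seg()
        #pragma comment(linker, "/SECTION:.shared,RWS")

        static HINSTANCE g_hInstance = NULL;   // per-process DLL instance handle
        static const UINT WM_APP_GESTURE = WM_APP + 1;

        BOOL APIENTRY DllMain(HINSTANCE hInst, DWORD reason, LPVOID)
        {
            if (reason == DLL_PROCESS_ATTACH)
                g_hInstance = hInst;
            return TRUE;
        }

        // Hook procedure: posts (never sends) the mouse event back to the installing app.
        LRESULT CALLBACK MouseProc(int nCode, WPARAM wParam, LPARAM lParam)
        {
            if (nCode >= 0 && g_hNotifyWnd != NULL)
            {
                const MOUSEHOOKSTRUCT* info = reinterpret_cast<const MOUSEHOOKSTRUCT*>(lParam);
                PostMessage(g_hNotifyWnd, WM_APP_GESTURE, wParam,
                            MAKELPARAM(info->pt.x, info->pt.y));
            }
            return CallNextHookEx(g_hHook, nCode, wParam, lParam);
        }

        extern "C" __declspec(dllexport) BOOL InstallHook(HWND hNotifyWnd)
        {
            g_hNotifyWnd = hNotifyWnd;
            // dwThreadId = 0 asks for a system-wide hook; the DLL is injected into other
            // processes the first time they generate mouse input.
            g_hHook = SetWindowsHookEx(WH_MOUSE, MouseProc, g_hInstance, 0);
            return g_hHook != NULL;
        }

        extern "C" __declspec(dllexport) BOOL RemoveHook(void)
        {
            BOOL ok = UnhookWindowsHookEx(g_hHook);
            g_hHook = NULL;
            return ok;
        }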

    Read the article

  • How to force client to switch RTP transport from UDP to TCP?

    - by Cipi
    If a client wants to watch a stream that is on my RTSP server, it first tries to set up the stream over UDP. How can I tell it that my server only supports RTP/AVP/TCP and that it should switch transports? I want to drop UDP support on my server entirely, but all the clients first try to SETUP the session over UDP and only later over TCP, and I want to switch them to TCP as early as possible in the RTSP exchange. How can I do that?
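    To illustrate the negotiation (a generic sketch, not tied to any particular RTSP server framework; the function name, session id and interleaved channels are made up): a SETUP that requests plain UDP can be answered with 461 Unsupported Transport, after which most clients retry the SETUP over an interleaved TCP transport, although that fallback behaviour depends on the client.

        // Sketch of the server side of the transport negotiation.
        #include <string>

        std::string HandleSetupTransport(const std::string& transportHeader, int cseq)
        {
            const bool wantsTcp =
                transportHeader.find("RTP/AVP/TCP") != std::string::npos ||
                transportHeader.find("interleaved") != std::string::npos;

            if (!wantsTcp)
            {
                // Reject UDP outright; the client is expected to re-issue SETUP over TCP.
                return "RTSP/1.0 461 Unsupported Transport\r\n"
                       "CSeq: " + std::to_string(cseq) + "\r\n\r\n";
            }

            // Accept the interleaved TCP transport (channel numbers are illustrative).
            return "RTSP/1.0 200 OK\r\n"
                   "CSeq: " + std::to_string(cseq) + "\r\n"
                   "Transport: RTP/AVP/TCP;unicast;interleaved=0-1\r\n"
                   "Session: 12345678\r\n\r\n";
        }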

    Read the article

  • Connecting two Windows XP with MSMQ

    - by NealWalters
    This question is a cross between a developer question and a server setup question. I asked on Server Fault but have no answer yet. As a developer, I need to set up a test to see how MSMQ works between two machines, and I'm unclear what to do. I will use C# or BizTalk to read/write to/from the queues. I have MSMQ installed on two Windows XP computers. Can I configure them to pass messages back and forth, or do I need an MSMQ server in the middle? If I need an MSMQ server, can the normal MSMQ that ships with Windows 2003 act as one? And then, how do I connect my Windows XP machines to that Windows 2003 server? Is it a) an on-screen admin dialog in the MSMQ plug-in for MMC, b) a config file, c) Active Directory, or d) something else? Thanks, Neal Walters

    Read the article

  • Spring/Hibernate/Junit example of testing DAO against HSQLDB

    - by Ryan P.
    Hi guys, I'm working on implementing a JUnit test to check the functionality of a DAO. (The DAO will create/read a basic object/table relationship in HSQLDB.) The trouble I'm having is that the persistence for the DAO (in the non-test code) is handled through an in-house solution using Spring/Hibernate, which eliminates the usual *.hbm.xml templates that most examples I have found contain. Because of this, I'm having some trouble understanding how to set up a JUnit test that exercises the DAO's create/read (just very basic functionality) against an in-memory HSQLDB. I have found a few examples, but the use of the in-house persistence means I can't extend some of the classes the examples show (I can't seem to get the application-context.xml set up properly). Can anyone suggest any projects/examples I could take a look at (or any documentation) to further my understanding of the best way to implement this test? I feel like this should be really simple, but I keep running into problems implementing the examples I have found. Thanks in advance!

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help in setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage and unmetered bandwidth @ 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP guide on Linode (I will also need MySQL and PHP): http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites, mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.

    Read the article

  • Drupal join on taxonomy terms

    - by ciscoheat
    I have a Drupal setup like this: Content type: Apartments. Vocabulary: Areas, which can be used with Apartments. Content type: User profile, with a Content Taxonomy field for Areas so users can select which areas they are interested in. I would like a view that shows all the user profiles that match the apartments in their area. In other words, a "User profile <- Areas <- Apartments" join. I've been mucking around with the Views interface for a while, but it's not clear to me how the relationships can be set up to achieve this. Can someone give me a hint? In case this cannot be easily solved with Views, what is a good way of doing it otherwise? Thanks for your help.

    Read the article

  • Rails 3: config/initializers errors for gem configuration

    - by neezer
    I'm trying to set up this plugin (Crumble), and the docs say I need to add a configuration file for the plugin in config/initializers/ like this (breadcrumb.rb): Breadcrumb.configure do ... end I add my directives in that block, reload the page, and I'm immediately greeted with a Passenger error: uninitialized constant Breadcrumb. What am I missing here? gem list shows Crumble as installed, and if I launch IRB I can require 'crumble' successfully. I remember doing this just fine in Rails 2.3.5. Here's my setup: rails 3.0.0.beta3, ruby 1.9.1p378 (via RVM), passenger 2.2.11 (with Apache2), crumble 0.1.2. I've been reading the Rails 3 release notes to see if they've changed anything that would affect this, but so far I haven't found anything to suggest that the above shouldn't work. I'd appreciate any guidance you could spare me!

    Read the article

  • Paging doesn't work in the Joomla Article Manager in the admin section

    - by SkippyFire
    I inherited a Joomla site that is having a problem with the Article Manager in the admin section. The pagination doesn't work! If I click the page number, forward, back, or page size, nothing happens. I found out that someone had previously installed the iJoomla SEO plugin, but it never worked so they removed it; I think it is incompatible with the version I have. I set up a local environment with almost the same setup (I have PHP 5.2.11 vs the server's 5.2.13) with WampServer, and I found that some of the session variables are missing! When dumped via print_r(), the $_SESSION variable is missing the "com_content", "global", and "com_plugins" arrays! So I guess that is the reason paging doesn't work, because the "com_content" array looks like it has paging info in it (maybe I'm wrong). So I'm running Joomla 1.5.13 on PHP 5.2.13. Anyone know why this would happen? Thanks in advance!

    Read the article

  • How do I use RVM w/ Hudson CI server on Debian?

    - by JoshReedSchramm
    I'm trying to set up an automated "build" server for my Rails projects using Hudson CI. So far it's able to run specs and do metrics on the code, but I have two different projects dependent on two different versions of Ruby. So I'm trying to use RVM to run multiple copies of Ruby and switch back and forth in a pre-build step. I found a couple of posts like this one that try to explain how to make this work, but I'm not running a startup script for Hudson; it starts on boot, which is how it worked out of the box when I installed it via the Debian instructions. The problem seems to be that even though Hudson runs under the "hudson" account, and that account has RVM installed (and working), when it tries to run a shell-based pre-build step to call rvm switch 1.8.7 it fails with the error "rvm: command not found". Not sure what I'm doing wrong. Hudson is using sh as its shell, but I also tried bash. No luck. Has anyone gotten this working in this setup before?

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object from standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, can I have Release() call a MyDispose() method on my COM object? My code to declare the object (C++/CLI):
        [Guid("57ED5388-blahblah")]
        [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)]
        [ComVisible(true)]
        public interface class IFoo
        {
            void Doit();
        };
        [Guid("417E5293-blahblah")]
        [ClassInterface(ClassInterfaceType::None)]
        [ComVisible(true)]
        public ref class Foo : IFoo
        {
        public:
            void MyDispose();
            ~Foo() {MyDispose();}                    // This is never called
            !Foo() {MyDispose();}                    // This is called by the garbage collector.
            virtual ULONG Release() {MyDispose();}   // This is never called
            virtual void Doit();
        };
    My code to use the object (native C++):
        #import "..\\Debug\\Foo.tlb"
        ...
        Bar::IFoo setup(__uuidof(Bar::Foo)); // This object comes from the .tlb.
        setup.Doit();
        setup->Release(); // explicit release, not really necessary since Bar::IFoo's destructor will call Release().
    If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override, it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope it automatically calls my .NET object's dispose code. I would think I could do it by overriding Release() and, if the object count is 0, calling MyDispose(). But apparently I'm not overriding Release() correctly, because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to dispose of a .NET object.
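    As a reference point, the workaround mentioned at the end of the question (exposing MyDispose() on the interface and having callers invoke it before Release()) can at least be made automatic on the native side with a small RAII wrapper. This is only a sketch: FooHolder is an invented name, it assumes MyDispose() has been added to IFoo, IFooPtr is the smart pointer that #import generates, and error handling and COM initialization are omitted.

        class FooHolder
        {
        public:
            FooHolder() : m_foo(__uuidof(Bar::Foo)) {}   // CoCreateInstance via _com_ptr_t

            ~FooHolder()
            {
                if (m_foo != NULL)
                {
                    m_foo->MyDispose();   // deterministic cleanup of the managed side
                    m_foo = NULL;         // the smart pointer calls Release() here
                }
            }

            Bar::IFooPtr operator->() { return m_foo; }

        private:
            Bar::IFooPtr m_foo;           // smart pointer generated by #import
        };

        void UseFoo()
        {
            FooHolder foo;                // assumes COM is already initialized on this thread
            foo->Doit();
        }                                 // MyDispose() then Release() run automatically here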

    Read the article

  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to: (1) save this whole setup somewhere (e.g., to load onto another machine); (2) see what changes I've made to which files; (3) revert changes, tag revisions, and all that other good version control stuff. Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?

    Read the article

  • Packaging LAME.exe with a C# project

    - by user347821
    Please forgive my noob-ness on this, but how do I package LAME.exe with a C# setup project? Currently, I have LAME being called like this:
        // use StringBuilder to create arguments
        var psinfo = new ProcessStartInfo(@"lame.exe")
        {
            Arguments = sb.ToString(),
            WorkingDirectory = Application.StartupPath,
            WindowStyle = ProcessWindowStyle.Hidden
        };
        var p = Process.Start(psinfo);
        p.WaitForExit();
    This works in debug and release modes on the development machine, but when I create a setup project for it, it never creates the MP3. LAME and the compiled code reside in the same directory when installed. Help!

    Read the article

  • Apache - Reverse Proxy and HTTP 302 status message

    - by Rob
    My team is trying to set up an Apache reverse proxy from a customer's site into one of our web applications: http://www.example.com/app1/some-path maps to http://internal1.example.com/some-path. Inside our application we use Struts and have redirect = true set on certain actions in order to provide certain functionality. The 302 status responses from these redirects cause the user to break out of the proxy, resulting in an error page for the end user: HTTP/1.1 302 Found Location: http://internal.example.com/some-path/redirect Is there any way to set up the reverse proxy in Apache so that the redirects work correctly, i.e. point to http://www.example.com/app1/some-path/redirect?

    Read the article

  • How do I call a bash script function using the exec function by passing parameters in PHP?

    - by Stan
    I have created a bash script that install magento in a cpanel. but i have a problem regarding the exec function. $function_path = Mage::getBaseDir()."/media/installer/function.sh"; exec("$function_path $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email"); This the bash shell script function.sh #!/bin/bash magento_detail $dbhost $dbname $dbuser $dbpass $url $admin_username $admin_password $admin_email function magento_detail() { stty erase '^?' echo "To install Magento, you will need a blank database ready with a user assigned to it." echo -n "Do you have all of your database information" dbinfo = "y" echo $dbinfo if [ "$dbinfo" -eq 'y' ] then echo "Database Host (usually localhost) : $dbhost " echo "Database Name : $dbname " echo "Database User : $dbuser " echo "Database Password : $dbpass " echo "Store Url : $url " echo "Admin Username : $admin_username " echo "Admin Password : $admin_password " echo "Admin Email Address : $admin_email " echo -n "Include Sample Data? (y/n) " echo sample = "y" if [ "$sample" -eq "y" ]; then echo echo "Now installing Magento with sample data..." echo echo "Downloading packages..." echo wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz wget http://www.magentocommerce.com/downloads/assets/1.2.0/magento-sample-data-1.2.0.tar.gz echo echo "Extracting data..." echo tar -zxvf magento-1.5.1.0.tar.gz tar -zxvf magento-sample-data-1.2.0.tar.gz echo echo "Moving files..." echo mv magento-sample-data-1.2.0/media/* magento/media/ mv magento-sample-data-1.2.0/magento_sample_data_for_1.2.0.sql magento/data.sql mv magento/index.php magento/.htaccess ./$test1 echo echo "Setting permissions..." echo chmod o+w var var/.htaccess app/etc chmod -R o+w media echo echo "Importing sample products..." echo mysql -h $dbhost -u $dbuser -p$dbpass $dbname < data.sql echo echo "Initializing PEAR registry..." echo chmod 550 mage ./mage mage-setup . echo echo "Downloading packages..." echo echo echo "Cleaning up files..." echo rm -rf downloader/pearlib/cache/* downloader/pearlib/download/* rm -rf magento/ magento-sample-data-1.2.0/ rm -rf magento-1.5.1.0.tar.gz magento-sample-data-1.2.0.tar.gz data.sql rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt data.sql echo echo "Installing Magento..." echo php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password" echo echo "Finished installing Magento" echo exit else echo "Now installing Magento without sample data..." echo echo "Downloading packages..." echo wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz echo echo "Extracting data..." echo tar -zxvf magento-1.5.1.0.tar.gz echo echo "Moving files..." echo mv magento/* magento/.htaccess . echo echo "Setting permissions..." echo chmod o+w var var/.htaccess app/etc chmod -R o+w media echo echo "Initializing PEAR registry..." echo chmod 550 mage ./mage mage-setup . echo echo "Downloading packages..." echo echo echo "Cleaning up files..." 
echo rm -rf downloader/pearlib/cache/* downloader/pearlib/download/* rm -rf magento/ magento-1.5.1.0.tar.gz rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt echo echo "Installing Magento..." echo php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password" echo echo "Finished installing Magento else part" exit fi else echo "Please setup a database first. Don't forget to assign a database user!" exit fi }` when i run this exec command,at that time it calls bash script function magento_installer() which contains arguments $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email. above arguments i'll pass in exec command to call magento_installer() function of bash script. so, is it right way of calling a bash script function? It directly goes to the last step of if condition and prints "Please setup a database first. Don't forget to assign a database user!". It cant enter it in if condition and directly goes to else condition. so please help me?

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This would serve more than just solving my issue with Sitefinity; I often wonder what's being sent to the MySQL server without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?

    Read the article

  • SpineJS and RequireJS

    - by ot3m3m
    I am using require.js 2.0. I have the following code that is called initially by require: app.js requirejs.config({ paths: { app: '..', spine: 'spine_src/spine', cs: "cs", }, shim: { 'spine_src/route': { deps: ['spine'] } } }); // Start the main app logic. requirejs(['cs!app/StartApp_Admin', 'spine','spine_src/route'], function (StartApp_Admin, Spine, Route) { { new StartApp_Admin(); } }); And the following controller: StartApp_Admin define () -> class StartApp_Admin extends Spine.Controller constructor: -> @routes "/twitter/:id": (params) -> console.log("/users/", params.id) Spine.Route.setup() When I execute the application I get the following error in my spine route library file (route.js). The application breaks when I call Spine.Route.setup() Uncaught TypeError: Object function (element) {return element;} has no method 'extend' Any help would be greatly appreciated

    Read the article

  • Unit Testing functions within repository interfaces - ASP.net MVC3 & Moq

    - by RawryLions
    I'm getting into writing unit tests and have implemented a nice repository pattern/Moq setup that allows me to test my functions without using "real" data. So far so good. However, in my repository interface for "Posts", IPostRepository, I have a function: Post getPostByID(int id); I want to be able to test this from my test class but cannot work out how. So far I am using this pattern for my tests: [SetUp] public void Setup() { mock = new Mock<IPostRepository>(); } [Test] public void someTest() { populate(10); //This populates the mock with 10 fake entries //do test here } In my function "someTest" I want to be able to call/test the function getPostByID. I can find the function with mock.Object.getPostByID, but the "Object" is null. Any help would be appreciated :) IPostRepository: public interface IPostRepository { IQueryable<Post> Posts {get;} void SavePost(Post post); Post getPostByID(int id); }

    Read the article

  • Setting up phpstorm 4 with XAMPP on Windows 7

    - by stirredo
    I need help setting up PhpStorm (4.0) with XAMPP, which is at the default location c:\htdocs. I create all my projects at c:\htdocs\projectname. Here are the things I have already done: set up the PHP interpreter by going to Settings and then PHP (main menu); gone to PHP Servers; and to set up the run configuration I have done the following: Now when I run using the above configuration it just takes me to localhost in my browser, instead of the file I am editing (http://localhost instead of localhost/myproject/myfile.php). What am I missing? I read the PhpStorm help topics and googled a lot but still couldn't figure it out.

    Read the article

  • Zend_Test: No default module defined for this application

    - by jiewmeng
    UPDATE 23 Dec I had the problem where Zend Framework complains about "No default module defined for this application". I didn't use Modules and the main app works fine. I finally solved the problem with the help from weierophinney.net Your bootstrap needs to minimally set the controller directory -- do a call to $this->frontController->addControllerDirectory(...) in your appBootstrap() method. I didn't in my example, as my Initialization plugin does that sort of thing for me. The problem is solved by adding the below to setUp() $this->getFrontController()->setControllerDirectory(APPLICATION_PATH . '/controllers'); But now, I have afew other questions: 1. Why does that value not get initialized by application.ini? In application.ini, I have [production] resources.frontController.controllerDirectory = APPLICATION_PATH "/controllers" [testing : production] // didn't change anything regarding modules nor controllers 2. I tried setting the controllerDirectory in bootstrap.php of my unit test, but it does not work $front = Zend_Controller_Front::getInstance(); $front->setControllerDirectory(APPLICATION_PATH . '/controllers'); The only way that works is using setUp(). Why is that? END UPDATE 23 Dec I am getting the above error when unit testing my controller plugins. I am not using any modules. in my bootstrap.php for unit testing, I even tried adding $front = Zend_Controller_Front::getInstance(); $front->setDefaultModule('default'); But it still does not work. Anyways my bootstrap.php looks like this UPDATE: the error looks something like There were 2 errors: 1) Application_Controller_Plugin_AclTest::testAccessToUnauthorizedPageRedirectsToLogin Zend_Controller_Exception: No default module defined for this application D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:391 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:204 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:244 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Front.php:954 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Test\PHPUnit\ControllerTestCase.php:205 D:\Projects\Tickle\tests\application\controllers\plugins\aclTest.php:6 2) Application_Controller_Plugin_AclTest::testAccessToAllowedPageWorks Zend_Controller_Exception: No default module defined for this application D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:391 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:204 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Dispatcher\Standard.php:244 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Controller\Front.php:954 D:\ResourceLibrary\Frameworks\PHPFrameworks\Zend\Test\PHPUnit\ControllerTestCase.php:205 D:\Projects\Tickle\tests\application\controllers\plugins\aclTest.php:16 UPDATE I tried adding public function setUp() { $front = Zend_Controller_Front::getInstance(); $front->setDefaultModule('default'); } then 1 part works. 
public function testAccessToUnauthorizedPageRedirectsToLogin() { // this fails with exception "Zend_Controller_Exception: No default module defined for this application" $this->dispatch('/projects'); $this->assertController('auth'); $this->assertAction('login'); } public function testAccessToAllowedPageWorks() { // this passes $auth = Zend_Auth::getInstance(); $authAdapter = new Application_Auth_Adapter('jiewmeng', 'password'); $auth->authenticate($authAdapter); $this->dispatch('/projects'); $this->assertController('projects'); $this->assertAction('index'); }

    Read the article

  • Git fatal: remote end hung up

    - by Bill
    So I thought I had finally got everything set up on Windows ... then ran into this issue. Current setup URL: ssh://user@host:port/myapp.git. I've already run PuTTY and can connect using valid .ppk keys via ~/.ssh/authorized_keys. In Git and TortoiseGit I set both to use "plink.exe". PuTTY works fine, no issues, but when I run that URL in bash for a git clone (url) I get: fatal: The remote end hung up unexpectedly. In a Cygwin bash terminal, running "ssh user@host" works no problem at all. Can anyone suggest anything?

    Read the article

  • ESS/AucTeX/Sweave integration

    - by aL3xa
    I'm using a GNU/Linux distro (Arch, if that's relevant), Emacs v23.2.1, ESS v5.9 and AucTeX v11.86. I want to set up AucTeX to recognize .Rnw files, so I can run LaTeX on .Rnw files with C-c C-c and get a .dvi file automatically. I reckon it's quite manageable by editing the .emacs file, but I still haven't got a firm grasp on Elisp. Yet another problem is quite annoying: somehow, LaTeX is not recognizing \usepackage{Sweave} in the preamble, so I actually need to copy the Sweave.sty file (in my case located in /usr/share/R/texmf/Sweave.sty) to the directory where the .Rnw file is located (and I'm getting more frustrated by the fact that this is a common bug on Windows platforms!). My question boils down to two problems: how to make LaTeX recognize \usepackage{Sweave} (without copying Sweave.sty to the "home" folder each time), and how to set up AucTeX to compile .Rnw files to .dvi.

    Read the article

  • Exposing headers on iPhone static library

    - by leolobato
    Hello guys, I've followed this tutorial for setting up a static library with common classes from 3 projects we are working on. It's pretty simple: create a new static library project in Xcode, add the code there, and change some headers' role from project to public. The tutorial says I should add my library folder to the header search paths recursively. Is this the right way to go? I mean, in my library project I have files separated into folders like Global/, InfoScreen/, and Additions/. I was trying to set up one LOKit.h file in the root folder, and inside that file #import everything I need to expose, so in my host project I wouldn't need to add the folder recursively to the header search path and would just #import "LOKit.h". But I couldn't get this to work: the host project won't build, complaining about all the classes I didn't add to LOKit.h, even though the library project builds. So, my question is, what is the right way of exposing header files when I set up a Cocoa Touch Static Library project in Xcode?

    Read the article

  • Using GLOrtho to view Side, Front, Top perspectives of a 3D scene

    - by talldan
    Dear all, I'm building a game level editing app as part of a university project. In my application I have multiple viewports: a perspective viewport and three orthographic views, all set up to view the same scene. I've successfully set up the orthographic views and can translate and scale them to mimic scrolling and zooming. Unfortunately, I'm having one problem: my scene still contains 3 dimensions, so objects at certain depths are clipped when they fall outside my clipping volume in the orthographic views. Most 3D authoring tools or level editors allow you to view all objects in orthographic mode regardless of depth. I guess what I need to do is scale my scene in the appropriate dimension so that all values lie between -1 and 1. Is there a straightforward way of going about this, or is there a different, better approach? Thanks very much for your help, Dan
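    For what it's worth, with a fixed-function orthographic projection the near/far planes are just two more clip planes, so one low-effort way to avoid the depth clipping described above is to give glOrtho a depth range large enough to contain the whole level. A minimal sketch follows; the extents, zoom parameter and function name are illustrative, not from the question.

        // Orthographic "front" view whose near/far planes are made wide enough to
        // contain the whole level, so nothing is lost to depth clipping.
        #include <GL/gl.h>

        void SetupFrontView(float halfWidth, float halfHeight, float zoom, float depthRange)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // Left/right/bottom/top define the visible slice and are scaled for zooming;
            // the near/far values are deliberately huge so depth never clips geometry.
            glOrtho(-halfWidth * zoom,  halfWidth * zoom,
                    -halfHeight * zoom, halfHeight * zoom,
                    -depthRange,        depthRange);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            // For the top or side views, rotate the scene 90 degrees before drawing,
            // e.g. glRotatef(90.0f, 1.0f, 0.0f, 0.0f) for a top-down view.
        }

    Scaling the scene into [-1, 1] as suggested in the question would also work; widening the orthographic depth range just avoids touching the scene data.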

    Read the article

  • How to mock test a web service in PHPUnit across multiple tests?

    - by scraton
    I am attempting to test a web service interface class using PHPUnit. Basically, this class makes calls to a SoapClient object. I am attempting to test this class in PHPUnit using the "getMockFromWsdl" method described here: http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubbing-and-mocking-web-services However, since I want to test multiple methods of this same class, every time I set up the object I also have to set up the mock WSDL SoapClient object. This causes a fatal error to be thrown: Fatal error: Cannot redeclare class xxxx in C:\web\php5\PEAR\PHPUnit\Framework\TestCase.php(1227) : eval()'d code on line 15 How can I use the same mock object across multiple tests without having to regenerate it from the WSDL each time? That seems to be the problem.

    Read the article
