Search Results

Search found 11993 results on 480 pages for 'define syntax'.


  • UITableViewController and viewForHeaderInSection problems

    - by Fiona
    Hello, So I need your help please! I've created a UITableViewController: ContactDetailViewController. In IB in the nib file, I've added a view ahead of the table view and hooked it up to headerView - a UIView declared in the .h file. I've also created a view: CustomerHeaderView However when I run the code below, Its throwing an exception at the following line: headerView = [[UIView alloc] initWithNibName:@"ContactHeaderDetail" bundle:nil]; The error being thrown is: 2010-05-20 10:59:50.405 OnePageCRM[19620:20b] * -[UIView initWithNibName:bundle:]: unrecognized selector sent to instance 0x3ca4fa0 2010-05-20 10:59:50.406 OnePageCRM[19620:20b] Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '** -[UIView initWithNibName:bundle:]: unrecognized selector sent to instance 0x3ca4fa0' So any ideas anyone? Many thanks, Fiona // // ContactDetailViewController.m // OnePageCRM // // Created by Fiona Tighe on 19/05/2010. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "ContactDetailViewController.h" #import "DisplayInfoViewController.h" #import "ActionViewController.h" #define SectionHeaderHeigth 200 @implementation ContactDetailViewController @synthesize name; @synthesize date; @synthesize headerView; @synthesize nextAction; @synthesize nameLabel; @synthesize usernameLabel; @synthesize nextActionTextField; @synthesize dateLabel; @synthesize notesTableView; @synthesize contactInfoButton; @synthesize backgroundInfoButton; @synthesize actionDoneButton; - (void)viewDidLoad { [super viewDidLoad]; // Uncomment the following line to display an Edit button in the navigation bar for this view controller. // self.navigationItem.rightBarButtonItem = self.editButtonItem; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } #pragma mark Table view methods - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 2; } // Customize the number of rows in the table view. - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { int numOfRows; NSLog(@"section: %d", section); switch (section){ case 0: numOfRows = 0; break; case 1: numOfRows = 3; break; default: break; } return numOfRows; } - (UIView *) tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section { if (section == 0){ headerView = [[UIView alloc] initWithNibName:@"ContactHeaderDetail" bundle:nil]; // headerView = [[UIView alloc] initWithNibName:@"ContactHeaderDetail" bundle:nil]; return headerView; }else{ return nil; } } - (CGFloat)tableView:(UITableView *)tableView heightForHeaderInSection:(NSInteger)section{ return SectionHeaderHeigth; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Navigation logic may go here. Create and push another view controller. 
// AnotherViewController *anotherViewController = [[AnotherViewController alloc] initWithNibName:@"AnotherView" bundle:nil]; // [self.navigationController pushViewController:anotherViewController]; // [anotherViewController release]; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:YES]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ -(IBAction)displayContactInfo:(id)sender{ DisplayInfoViewController *divc = [[DisplayInfoViewController alloc] init]; divc.textView = self.nextAction; divc.title = @"Contact Info"; [self.navigationController pushViewController:divc animated:YES]; [divc release]; } -(IBAction)displayBackgroundInfo:(id)sender{ DisplayInfoViewController *divc = [[DisplayInfoViewController alloc] init]; divc.textView = self.nextAction; divc.title = @"Background Info"; [self.navigationController pushViewController:divc animated:YES]; [divc release]; } -(IBAction)actionDone:(id)sender{ ActionViewController *avc = [[ActionViewController alloc] init]; avc.title = @"Action"; avc.nextAction = self.nextAction; [self.navigationController pushViewController:avc animated:YES]; [avc release]; } - (void)dealloc { [name release]; [date release]; [nextAction release]; [nameLabel release]; [usernameLabel release]; [nextActionTextField release]; [dateLabel release]; [notesTableView release]; [contactInfoButton release]; [backgroundInfoButton release]; [actionDoneButton release]; [headerView release]; [super dealloc]; } @end

    Read the article

  • wss4j: Cannot find key for alias: monit

    - by feiroox
    Hi I'm using axis1.4 and wss4j. When I define in client-config.wsdd for WSDoAllSender and WSDoAllReceiver different signaturePropFiles where I have different key stores defined with different certificates, I'm able to have different certificates for sending and receiving. But when I use the same signaturePropFiles' with the same keystore. I get this message when I try to send a message: org.apache.ws.security.components.crypto.CryptoBase -- Cannot find key for alias: [monit] in keystore of type [jks] from provider [SUN version 1.5] with size [2] and aliases: {other, monit} - Error during Signature: ; nested exception is: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] org.apache.ws.security.WSSecurityException: Error during Signature: ; nested exception is: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:60) at org.apache.ws.security.handler.WSHandler.doSenderAction(WSHandler.java:202) at org.apache.ws.axis.security.WSDoAllSender.invoke(WSDoAllSender.java:168) at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32) at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118) at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83) at org.apache.axis.client.AxisClient.invoke(AxisClient.java:127) at org.apache.axis.client.Call.invokeEngine(Call.java:2784) at org.apache.axis.client.Call.invoke(Call.java:2767) at org.apache.axis.client.Call.invoke(Call.java:2443) at org.apache.axis.client.Call.invoke(Call.java:2366) at org.apache.axis.client.Call.invoke(Call.java:1812) at cz.ing.oopf.model.wsclient.ModelWebServiceSoapBindingStub.getStatus(ModelWebServiceSoapBindingStub.java:213) at cz.ing.oopf.wsgemonitor.monitor.util.MonitorUtil.checkStatus(MonitorUtil.java:18) at cz.ing.oopf.wsgemonitor.monitor.Test02WsMonitor.runTest(Test02WsMonitor.java:23) at cz.ing.oopf.wsgemonitor.Main.main(Main.java:75) Caused by: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:721) at org.apache.ws.security.message.WSSecSignature.build(WSSecSignature.java:780) at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:57) ... 15 more Caused by: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.components.crypto.CryptoBase.getPrivateKey(CryptoBase.java:214) at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:713) ... 17 more How to have two certificates for wss4j in the same keystore? why it cannot find my certificate there when i have two certificates in one keystore. 
I have the same password for both certificates regarding PWCallback (CallbackHandler) My properties file: org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin org.apache.ws.security.crypto.merlin.keystore.type=jks org.apache.ws.security.crypto.merlin.keystore.password=keystore org.apache.ws.security.crypto.merlin.keystore.alias=monit org.apache.ws.security.crypto.merlin.alias.password=*** org.apache.ws.security.crypto.merlin.file=key.jks My client-config.wsdd: <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java"> <globalConfiguration> <requestFlow> <handler name="WSSecurity" type="java:org.apache.ws.axis.security.WSDoAllSender"> <parameter name="user" value="monit"/> <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/> <parameter name="action" value="Signature"/> <parameter name="signaturePropFile" value="monit.properties"/> <parameter name="signatureKeyIdentifier" value="DirectReference" /> <parameter name="mustUnderstand" value="0"/> </handler> <handler type="java:org.apache.axis.handlers.JWSHandler"> <parameter name="scope" value="session"/> </handler> <handler type="java:org.apache.axis.handlers.JWSHandler"> <parameter name="scope" value="request"/> <parameter name="extension" value=".jwr"/> </handler> </requestFlow> <responseFlow> <handler name="DoSecurityReceiver" type="java:org.apache.ws.axis.security.WSDoAllReceiver"> <parameter name="user" value="other"/> <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/> <parameter name="action" value="Signature"/> <parameter name="signaturePropFile" value="other.properties"/> <parameter name="signatureKeyIdentifier" value="DirectReference" /> </handler> </responseFlow> </globalConfiguration> <transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender"> </transport> </deployment> Listing from keytool: keytool -keystore monit-key.jks -v -list Enter keystore password: Keystore type: JKS Keystore provider: SUN Your keystore contains 2 entries Alias name: other Creation date: Jul 22, 2009 Entry type: PrivateKeyEntry Certificate chain length: 1 Certificate[1]: .... Alias name: monit Creation date: Oct 19, 2009 Entry type: trustedCertEntry
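    For what it's worth, the keytool listing above already hints at the cause: the 'other' alias is a PrivateKeyEntry while 'monit' is only a trustedCertEntry, and signature creation needs a private key. Below is a minimal Java sketch for checking what each alias actually contains; the file name and password are placeholders, so substitute the merlin.file and keystore.password values from your signaturePropFile.

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.util.Collections;

        public class KeystoreCheck {
            public static void main(String[] args) throws Exception {
                KeyStore ks = KeyStore.getInstance("JKS");
                try (FileInputStream in = new FileInputStream("monit-key.jks")) { // placeholder path
                    ks.load(in, "keystore".toCharArray());                        // placeholder password
                }
                for (String alias : Collections.list(ks.aliases())) {
                    // isKeyEntry is true only for private-key entries; a trustedCertEntry
                    // holds a certificate but no private key, so it cannot be used to sign.
                    System.out.println(alias + " -> "
                            + (ks.isKeyEntry(alias) ? "private key entry" : "certificate only"));
                }
            }
        }

    If the signing alias turns out to be certificate-only, its private key would have to be imported into that keystore (or the sending handler pointed at whichever keystore actually holds it).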

    Read the article

  • org.apache.struts2.dispatcher.Dispatcher: Could not find action or result Error

    - by peterwkc
    i tried to code the following simple struts but encounter this error during run time. [org.apache.struts2.dispatcher.Dispatcher] Could not find action or result: No result defined for action com.peter.action.LoginAction and result success index.jsp <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <%@ taglib prefix="s" uri="/struts-tags" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Struts Tutorial</title> </head> <body> <h2>Hello Struts</h2> <s:form action="login" > <s:textfield name="username" label="Username:" /> <s:password name="password" label="Password:"/> <s:submit /> </s:form> </body> </html> LoginAction.java /** * */ package com.peter.action; //import org.apache.struts2.convention.annotation.Namespace; import org.apache.struts2.convention.annotation.ResultPath; import org.apache.struts2.convention.annotation.Result; import org.apache.struts2.convention.annotation.Action; import com.opensymphony.xwork2.ActionSupport; /** * @author nicholas_tse * */ //@Namespace("/") To define URL namespace @ResultPath("/") // To instruct Struts where to search result page(jsp) public class LoginAction extends ActionSupport { private String username, password; /** * */ private static final long serialVersionUID = -8992836566328180883L; /** * */ public LoginAction() { // TODO Auto-generated constructor stub } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } @Override @Action(value = "login", results = {@Result(name="success", location="welcome.jsp")}) public String execute() { return SUCCESS; } } /* Remove * struts2-gxp-plugin * struts2-portlet-plugin * struts2-jsf-plugin * struts2-osgi-plugin and its related osgi-plugin * struts-rest-plugin * * Add * velocity-tools-view * * */ web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app id="WebApp_ID" version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <display-name>Struts</display-name> <!-- org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter org.apache.struts2.dispatcher.FilterDispatcher --> <filter> <filter-name>Struts_Filter</filter-name> <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class> <init-param> <param-name>actionPackages</param-name> <param-value>com.peter.action</param-value> </init-param> </filter> <filter-mapping> <filter-name>Struts_Filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> </web-app> Besides the runtime error, there is deployment error which is ERROR [com.opensymphony.xwork2.util.finder.ClassFinder] (MSC service thread 1-2) Unable to read class [WEB-INF.classes.com.peter.action.LoginAction]: Could not load WEB-INF/classes/com/peter/action/LoginAction.class - [unknown location] at com.opensymphony.xwork2.util.finder.ClassFinder.readClassDef(ClassFinder.java:785) [xwork-core-2.3.1.2.jar:2.3.1.2] AFAIK, the scanning methodology of 
Struts2 will scan the default packages named struts2 for any annotated class, but I have instructed it to scan com.peter.action using the actionPackages init-param and it is still unable to find the class. It is pretty weird. Please help. Thanks.

    Read the article

  • ASP.NET MVC & Silverlight development - on Ubuntu

    - by queen3
    I recently moved my working environment from Windows 7 to Ubuntu, and enjoy this every minute of my working day. My work is currently to develop ASP.NET MVC and Silverlight applications. Thus, important Windows stuff is being still run in VirtualBox (such as IIS, MSSQL, Silverlight 3, and legacy COM stuff). For now, I use Visual Studio under VirtualBox as editor/IDE/debugger. But since I prefer Ubuntu fonts and UI much more I'd like to move at least editor (and better IDE) to native Ubuntu. Things I have already done: I store project files in my home folder and run Visual Studio from \vboxsvr share. With few tricks for ASP.NET it works. I use svn on Ubuntu. I test my ASP.NET MVC site using FireFox/FireBug on Ubuntu. What I need on Ubuntu: SQL client to manage MSSQL. I mostly need querying, but I would miss SQL intellisense support. I enjoy command-line svn a lot, but there're times when it's not enough (e.g. view files / check diffs / selectively commit at the sames time) so I wonder if there're any addons - I don't mean replacement for svn, just addons for rare cases like above. I wonder if there're editors that can provide some C# intellisense. Yes I know about MonoDevelop, but will it provide intellisense without compiling (since I'm going to compile remotely in Win box)? And pretty big topic, what's the best way (editor/IDE) to do "folder-based" development? What I mean: The project is /trunk. Everything is there. I don't want to manually add files to project or like that. The project is the folder and files down there. Main task is to edit files of course. I need a quick way to open / search for files in the project. Like in Resharper, I can click Ctrl-Shift-T (IIRC) and just type file name, a list of matching files in the project folder and below is shown. For example, gedit has file tree browser, but I can't quickly type XYZ to find all XYZ files there; moreover it doesn't automatically switch focus to/from editor; so it's more mouse-oriented; I need 100% keyboard way. I need syntax highlighting for C#/HTML/JS. Most importantly, I need HTML tags autocompletion. I can live without it, but I'll be sorry. I need to run compilation remotely (via ssh I think, invoking NAnt script which does MSBuild) and grab results such as errors and warnings, and I'd prefer to quickly go to error line/file. In short, I need to edit/search/open/svn/compile/run files in some folder. Looks like a case for command-line, but imaging I'm in /trunk and want to open file.cs inside /trunk/foo/bar/boo/far, I wouldn't want to type all this path even with bash autocompletion help. I'd prefer to enter :open file.cs, and maybe then select from list of file.cs and file1.cs. Well, maybe I'll add more soon. Actually, I don't have exact requirements; for example, I don't even know if I need to ask for IDE or editor; or should it have svn support integrated or I need to use it from console; do people work with files from console (search/commit/delete/etc) and open editor from there, or they work from editor/IDE and manage files (search/commit/delete) from there? What's better? I have a feeling that vim might have everything I need. I'd like to confirm that, before I spend a lot of time learning it. No I don't want Emacs. Any other IDE? I like Eclipse, but is it good for such stuff? And after all, do you feel like it's a good way to go? 
I enjoy Ubuntu, enjoy learning new stuff, and Ubuntu + Windows in VirtualBox actually runs faster than Windows 7 on my machine, but maybe I need to keep development (editor/file management/etc.) in Windows/VirtualBox only, leaving other stuff for Ubuntu?

    Read the article

  • Delphi 2009 MS Build headaches

    - by X-Ray
    does anyone know of any good description of delphi's build system? (i know it's using MS Build.) i'm using delphi 2009. i wanted to set up a variation of the Debug build configuration that (often) has different defines (d2009 seems to call them "preprocessor symbols"). the problem i'm having is that--even though i turned off "inherit" for "Base" and "Debug"--have only very limited control. for example, i can't get rid of FastMM_. <PropertyGroup> <ProjectGuid>{D7FE7347-8E2C-438C-A275-38B8DA9244B0}</ProjectGuid> <ProjectVersion>12.0</ProjectVersion> <MainSource>oca.dpr</MainSource> <Config Condition="'$(Config)'==''">Debug</Config> <DCC_DCCCompiler>DCC32</DCC_DCCCompiler> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Base' or '$(Base)'!=''"> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Release' or '$(Cfg_1)'!=''"> <Cfg_1>true</Cfg_1> <CfgParent>Base</CfgParent> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Debug' or '$(Cfg_2)'!=''"> <Cfg_2>true</Cfg_2> <CfgParent>Base</CfgParent> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Base)'!=''"> <DCC_StringChecks>off</DCC_StringChecks> <DCC_MinimumEnumSize>4</DCC_MinimumEnumSize> <DCC_RangeChecking>true</DCC_RangeChecking> <DCC_IntegerOverflowCheck>true</DCC_IntegerOverflowCheck> <DCC_UNIT_PLATFORM>false</DCC_UNIT_PLATFORM> <DCC_SYMBOL_PLATFORM>false</DCC_SYMBOL_PLATFORM> <DCC_DcuOutput>.\dcu</DCC_DcuOutput> <DCC_UnitSearchPath>C:\Prj\Lib\AutoQADocking\Delphi2009.Win32\Lib;$(BDS)\Source\DUnit\src;$(DCC_UnitSearchPath)</DCC_UnitSearchPath> <DCC_Optimize>false</DCC_Optimize> <DCC_DependencyCheckOutputName>oca.exe</DCC_DependencyCheckOutputName> <DCC_ImageBase>00400000</DCC_ImageBase> <DCC_UnitAlias>WinTypes=Windows;WinProcs=Windows;DbiTypes=BDE;DbiProcs=BDE;DbiErrs=BDE;$(DCC_UnitAlias)</DCC_UnitAlias> <DCC_Platform>x86</DCC_Platform> <DCC_E>false</DCC_E> <DCC_N>false</DCC_N> <DCC_S>false</DCC_S> <DCC_F>false</DCC_F> <DCC_K>false</DCC_K> </PropertyGroup> <PropertyGroup Condition="'$(Cfg_1)'!=''"> <DCC_PentiumSafeDivide>true</DCC_PentiumSafeDivide> <DCC_Optimize>true</DCC_Optimize> <DCC_IntegerOverflowCheck>false</DCC_IntegerOverflowCheck> <BRCC_Defines>MadExcept;FastMM;$(BRCC_Defines)</BRCC_Defines> <DCC_AssertionsAtRuntime>false</DCC_AssertionsAtRuntime> <DCC_LocalDebugSymbols>false</DCC_LocalDebugSymbols> <DCC_Define>RELEASE;$(DCC_Define)</DCC_Define> <DCC_SymbolReferenceInfo>0</DCC_SymbolReferenceInfo> <DCC_DebugInformation>false</DCC_DebugInformation> </PropertyGroup> <PropertyGroup Condition="'$(Cfg_2)'!=''"> <DCC_DebugInfoInExe>true</DCC_DebugInfoInExe> <BRCC_Defines>FastMM</BRCC_Defines> <DCC_DebugDCUs>true</DCC_DebugDCUs> <DCC_MapFile>3</DCC_MapFile> <DCC_Define>DEBUG;FastMM_;madExcept;$(DCC_Define)</DCC_Define> </PropertyGroup> i even had to edit it today with notepad to get rid of a DCC define that the delphi UI doesn't seem to give access to. (it said "From Delphi Compiler" for the item i couldn't remove.) does anyone know a good primer on the use of this feature in delphi? thank you!

    Read the article

  • Batch script breaks off after calling another batch file

    - by Sven Arno Jopen
    i have the problem, that my batch process breaking up each time after call a other batch file. The batch files are used to run a make process out from IBM Rhapsody. There convert the call from Rhapsody to the Visual Studio tools. So nmake will be called from the batch after make different settings. The scripts aren’t written completely from me, I only adapt theme to run under both windows architecture versions, x86 and x64. The first script (vs2005_make.bat) will be called from Rhapsody and run to the “call” statement. The second script (Vcvars_VisualStudio2005.bat) runs to the end. But the first script isn’t resume working, at this point the process break up without a error message. I'm not very familiar with batch files, this is the first time I make more than simple console commands in a batch file. So I hope I have given all information’s which needed, otherwise ask me. Here the start script (vs2005_make.bat): :: parameter 1 - Makefile which should be used :: parameter 2 - The make target mark @echo off IF "%2"=="" set target=all IF "%2"=="all" set target=all IF "%2"=="build" set target=all IF "%2"=="rebuild" set target=clean all IF "%2"=="clean" set target=clean set RegQry="HKLM\Hardware\Description\System\CentralProcessor\0" REG.exe Query %RegQry% > checkOS.txt Find /i "x86" < CheckOS.txt > StringCheck.txt IF %ERRORLEVEL%==0 ( set arch=x86 ) ELSE ( set arch=x64 ) call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch% IF %ERRORLEVEL%==0 ( set makeflags= nmake /nologo /S /F %1 %target% ) del checkOS.txt del StringCheck.txt exit and here the called script (Vcvars_VisualStudio2005.bat): :: param 1 - Processor architecture @echo off ECHO param 1 = %1 IF %1==x86 ( SET ProgrammPath=%ProgramFiles% ) ELSE IF %1==x64 ( SET ProgrammPath=%ProgramFiles(x86)% ) ELSE ( ECHO Unknowen architectur EXIT /B 1 ) SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE" SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8" SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework SET FrameworkVersion=v2.0.50727 SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0" rem Root of Visual Studio common files. IF %VSINSTALLDIR%=="" GOTO Usage IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR% rem rem Root of Visual Studio ide installed files. rem SET DevEnvDir=%VSINSTALLDIR% rem rem Root of Visual C++ installed files. rem SET MSVCDir=%VCINSTALLDIR%\VC SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%Common7 \Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%; SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE% SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB% GOTO end :Usage ECHO. VSINSTALLDIR variable is not set. ECHO. ECHO SYNTAX: %0 GOTO end :end Here the console output, what I not understand here and find very suspicious is that after the “IF %ERRORLEVEL%” Statement in the first script all will be put out regardless the echo is set to off… Executing: "C:\Programme\IBM\Rhapsody752\Share\etc\vs2005_make.bat" Simulation.mak build IF %ERRORLEVEL%==0 ( Mehr? set arch=x86 Mehr? ) ELSE ( Mehr? set arch=x64 Mehr? ) call "%ProgramFiles%\IBM\Rhapsody752\Share\etc\Vcvars_VisualStudio2005.bat" %arch% :: param 1 - Processor architecture @echo off ECHO param 1 = %1 param 1 = x86 IF %1==x86 ( Mehr? 
SET ProgrammPath=%ProgramFiles% Mehr? ) ELSE IF %1==x64 ( Mehr? SET ProgrammPath=%ProgramFiles(x86)% Mehr? ) ELSE ( Mehr? ECHO Unknowen architectur Mehr? EXIT /B 1 Mehr? ) SET VSINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8\Common7\IDE" SET VCINSTALLDIR="%ProgrammPath%\Microsoft Visual Studio 8" SET FrameworkDir=C:\WINDOWS\Microsoft.NET\Framework SET FrameworkVersion=v2.0.50727 SET FrameworkSDKDir="%ProgrammPath%\Microsoft Visual Studio 8\SDK\v2.0" rem Root of Visual Studio common files. IF %VSINSTALLDIR%=="" GOTO Usage IF %VCINSTALLDIR%=="" SET VCINSTALLDIR=%VSINSTALLDIR% rem rem Root of Visual Studio ide installed files. rem SET DevEnvDir=%VSINSTALLDIR% rem rem Root of Visual C++ installed files. rem SET MSVCDir=%VCINSTALLDIR%\VC SET PATH=%DevEnvDir%;%MSVCDir%\BIN;%VCINSTALLDIR%\Common7\Tools;%VCINSTALLDIR%\Common7\Tools\bin\prerelease;%VCINSTALLDIR%\Common7\Tools\bin;%FrameworkSDKDir%\bin;%FrameworkDir%\%FrameworkVersion%;%PATH%; SET INCLUDE=%MSVCDir%\ATLMFC\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\PlatformSDK\include\gl;%MSVCDir%\PlatformSDK\include;%FrameworkSDKDir%\include;%INCLUDE% SET LIB=%MSVCDir%\ATLMFC\LIB;%MSVCDir%\LIB;%MSVCDir%\PlatformSDK\lib;%FrameworkSDKDir%\lib;%LIB% GOTO end Build Done I hope someone have an idea, i work now since two days and don't find the error... thanking you in anticipation. Note: The word “Mehr?” in the text output is german and means “more”. I don't know were it comes from and it is possible that it is a bad translation from the English output to german.

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic. From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted: I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating. Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ... A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ... A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ... The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system. GET /foos/{id} # read a Foo POST /foos/{id} # create a Foo PUT /foos/{id} # update a Foo Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment. 
Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have I earned my paycheck. So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying and how do you put it into practice when documenting/implementing REST APIs? Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).
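    As a rough illustration of the "enter at the bookmark and follow hypertext" reading (not an authoritative HATEOAS client; the base URI, media type and link-extraction step below are all invented for the example), here is a Java sketch of a client that never constructs /foos/{id} itself:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class HypermediaClient {
            // The entry point is the only URI the client knows up front.
            private static final URI BOOKMARK = URI.create("https://api.example.com/");

            public static void main(String[] args) throws Exception {
                HttpClient client = HttpClient.newHttpClient();
                String entryDoc = get(client, BOOKMARK);

                // A real client would be driven by the media type's link relations;
                // a crude regex over href attributes stands in for that machinery here.
                for (URI link : extractLinks(entryDoc)) {
                    String representation = get(client, link);
                    System.out.println(link + " -> " + representation.length() + " bytes");
                }
            }

            private static String get(HttpClient client, URI uri) throws Exception {
                HttpRequest request = HttpRequest.newBuilder(uri)
                        .header("Accept", "application/vnd.example+xml") // hypothetical media type
                        .build();
                return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            }

            private static List<URI> extractLinks(String body) {
                List<URI> links = new ArrayList<>();
                Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(body);
                while (m.find()) {
                    links.add(BOOKMARK.resolve(m.group(1)));
                }
                return links;
            }
        }

    The only point of the sketch is that every URI after the first one comes out of a prior response rather than out of documentation.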

    Read the article

  • Uploadify Minimum Image Width And Height

    - by Richard Knop
    So I am using the Uplodify plugin to allow users to upload multiple images at once. The problem is I need to set a minimum width and height for images. Let's say 150x150px is the smallest image users can upload. How can I set this limitation in the Uploadify plugin? When user tries to upload smaller picture, I would like to display some error message as well. Here is the PHP file that is called bu the plugin to upload images: <?php define('BASE_PATH', substr(dirname(dirname(__FILE__)), 0, -22)); // set the include path set_include_path(BASE_PATH . '/../library' . PATH_SEPARATOR . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path()); // autoload classes from the library function __autoload($class) { include str_replace('_', '/', $class) . '.php'; } $configuration = new Zend_Config_Ini(BASE_PATH . '/application' . '/configs/application.ini', 'development'); $dbAdapter = Zend_Db::factory($configuration->database); Zend_Db_Table_Abstract::setDefaultAdapter($dbAdapter); function _getTable($table) { include BASE_PATH . '/application/modules/default/models/' . $table . '.php'; return new $table(); } $albums = _getTable('Albums'); $media = _getTable('Media'); if (false === empty($_FILES)) { $tempFile = $_FILES['Filedata']['tmp_name']; $extension = end(explode('.', $_FILES['Filedata']['name'])); // insert temporary row into the database $data = array(); $data['type'] = 'photo'; $data['type2'] = 'public'; $data['status'] = 'temporary'; $data['user_id'] = $_REQUEST['user_id']; $paths = $media->add($data, $extension, $dbAdapter); // save the photo move_uploaded_file($tempFile, BASE_PATH . '/public/' . $paths[0]); // create a thumbnail include BASE_PATH . '/library/My/PHPThumbnailer/ThumbLib.inc.php'; $thumb = PhpThumbFactory::create(BASE_PATH . '/public/' . $paths[0]); $thumb->adaptiveResize(85, 85); $thumb->save(BASE_PATH . '/public/' . $paths[1]); // add watermark to the bottom right corner $pathToFullImage = BASE_PATH . '/public/' . $paths[0]; $size = getimagesize($pathToFullImage); switch ($extension) { case 'gif': $im = imagecreatefromgif($pathToFullImage); break; case 'jpg': $im = imagecreatefromjpeg($pathToFullImage); break; case 'png': $im = imagecreatefrompng($pathToFullImage); break; } if (false !== $im) { $white = imagecolorallocate($im, 255, 255, 255); $font = BASE_PATH . '/public/fonts/arial.ttf'; imagefttext($im, 13, // font size 0, // angle $size[0] - 132, // x axis (top left is [0, 0]) $size[1] - 13, // y axis $white, $font, 'HunnyHive.com'); switch ($extension) { case 'gif': imagegif($im, $pathToFullImage); break; case 'jpg': imagejpeg($im, $pathToFullImage, 100); break; case 'png': imagepng($im, $pathToFullImage, 0); break; } imagedestroy($im); } echo "1"; } And here's the javascript: $(document).ready(function() { $('#photo').uploadify({ 'uploader' : '/flash-uploader/scripts/uploadify.swf', 'script' : '/flash-uploader/scripts/upload-public-photo.php', 'cancelImg' : '/flash-uploader/cancel.png', 'scriptData' : {'user_id' : 'USER_ID'}, 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'sizeLimit' : 2097152, 'fileExt' : '*.jpg;*.jpeg;*.gif;*.png', 'wmode' : 'transparent', 'onComplete' : function() { $.get('/my-account/temporary-public-photos', function(data) { $('#temporaryPhotos').html(data); }); } }); $('#upload_public_photo').hover(function() { var titles = '{'; $('.title').each(function() { var title = $(this).val(); if ('Title...' 
!= title) { var id = $(this).attr('name'); id = id.substr(5); title = jQuery.trim(title); if (titles.length > 1) { titles += ','; } titles += '"' + id + '"' + ':"' + title + '"'; } }); titles += '}'; $('#titles').val(titles); }); }); Now bear in mind that I know how to check image dimensions in the PHP file. But I'm not sure how to modify the JavaScript so it won't upload images with very small dimensions.

    Read the article

  • IOC Container Handling State Params in Non-Default Constructor

    - by Mystagogue
    For the purpose of this discussion, there are two kinds of parameters an object constructor might take: state dependency or service dependency. Supplying a service dependency with an IOC container is easy: DI takes over. But in contrast, state dependencies are usually only known to the client. That is, the object requestor. It turns out that having a client supply the state params through an IOC Container is quite painful. I will show several different ways to do this, all of which have big problems, and ask the community if there is another option I'm missing. Let's begin: Before I added an IOC container to my project code, I started with a class like this: class Foobar { //parameters are state dependencies, not service dependencies public Foobar(string alpha, int omega){...}; //...other stuff } I decide to add a Logger service depdendency to the Foobar class, which perhaps I'll provide through DI: class Foobar { public Foobar(string alpha, int omega, ILogger log){...}; //...other stuff } But then I'm also told I need to make class Foobar itself "swappable." That is, I'm required to service-locate a Foobar instance. I add a new interface into the mix: class Foobar : IFoobar { public Foobar(string alpha, int omega, ILogger log){...}; //...other stuff } When I make the service locator call, it will DI the ILogger service dependency for me. Unfortunately the same is not true of the state dependencies Alpha and Omega. Some containers offer a syntax to address this: //Unity 2.0 pseudo-ish code: myContainer.Resolve<IFoobar>( new parameterOverride[] { {"alpha", "one"}, {"omega",2} } ); I like the feature, but I don't like that it is untyped and not evident to the developer what parameters must be passed (via intellisense, etc). So I look at another solution: //This is a "boiler plate" heavy approach! class Foobar : IFoobar { public Foobar (string alpha, int omega){...}; //...stuff } class FoobarFactory : IFoobarFactory { public IFoobar IFoobarFactory.Create(string alpha, int omega){ return new Foobar(alpha, omega); } } //fetch it... myContainer.Resolve<IFoobarFactory>().Create("one", 2); The above solves the type-safety and intellisense problem, but it (1) forced class Foobar to fetch an ILogger through a service locator rather than DI and (2) it requires me to make a bunch of boiler-plate (XXXFactory, IXXXFactory) for all varieties of Foobar implementations I might use. Should I decide to go with a pure service locator approach, it may not be a problem. But I still can't stand all the boiler-plate needed to make this work. So then I try this: //code named "concrete creator" class Foobar : IFoobar { public Foobar(string alpha, int omega, ILogger log){...}; static IFoobar Create(string alpha, int omega){ //unity 2.0 pseudo-ish code. Assume a common //service locator, or singleton holds the container... return Container.Resolve<IFoobar>( new parameterOverride[] {{"alpha", alpha},{"omega", omega} } ); } //Get my instance: Foobar.Create("alpha",2); I actually don't mind that I'm using the concrete "Foobar" class to create an IFoobar. It represents a base concept that I don't expect to change in my code. I also don't mind the lack of type-safety in the static "Create", because it is now encapsulated. My intellisense is working too! Any concrete instance made this way will ignore the supplied state params if they don't apply (a Unity 2.0 behavior). Perhaps a different concrete implementation "FooFoobar" might have a formal arg name mismatch, but I'm still pretty happy with it. 
But the big problem with this approach is that it only works effectively with Unity 2.0 (a mismatched parameter in Structure Map will throw an exception). So it is good only if I stay with Unity. The problem is, I'm beginning to like Structure Map a lot more. So now I go onto yet another option: class Foobar : IFoobar, IFoobarInit { public Foobar(ILogger log){...}; public IFoobar IFoobarInit.Initialize(string alpha, int omega){ this.alpha = alpha; this.omega = omega; return this; } } //now create it... IFoobar foo = myContainer.resolve<IFoobarInit>().Initialize("one", 2) Now with this I've got a somewhat nice compromise with the other approaches: (1) My arguments are type-safe / intellisense aware (2) I have a choice of fetching the ILogger via DI (shown above) or service locator, (3) there is no need to make one or more seperate concrete FoobarFactory classes (contrast with the verbose "boiler-plate" example code earlier), and (4) it reasonably upholds the principle "make interfaces easy to use correctly, and hard to use incorrectly." At least it arguably is no worse than the alternatives previously discussed. One acceptance barrier yet remains: I also want to apply "design by contract." Every sample I presented was intentionally favoring constructor injection (for state dependencies) because I want to preserve "invariant" support as most commonly practiced. Namely, the invariant is established when the constructor completes. In the sample above, the invarient is not established when object construction completes. As long as I'm doing home-grown "design by contract" I could just tell developers not to test the invariant until the Initialize(...) method is called. But more to the point, when .net 4.0 comes out I want to use its "code contract" support for design by contract. From what I read, it will not be compatible with this last approach. Curses! Of course it also occurs to me that my entire philosophy is off. Perhaps I'd be told that conjuring a Foobar : IFoobar via a service locator implies that it is a service - and services only have other service dependencies, they don't have state dependencies (such as the Alpha and Omega of these examples). I'm open to listening to such philosophical matters as well, but I'd also like to know what semi-authorative reference to read that would steer me down that thought path. So now I turn it to the community. What approach should I consider that I havn't yet? Must I really believe I've exhausted my options?
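    Purely as an illustration of the typed-factory variant discussed above (written in Java rather than C#, with the container wiring left out and every name invented), the idea is that the factory receives the service dependencies once while callers supply the state per call, and the constructor can still establish the invariant:

        // Sketch only: mirrors the IFoobarFactory approach from the question.
        interface Logger {
            void info(String message);
        }

        interface Foobar {
            void frob();
        }

        interface FoobarFactory {
            Foobar create(String alpha, int omega); // state dependencies, type-safe and discoverable
        }

        class DefaultFoobar implements Foobar {
            private final String alpha;
            private final int omega;
            private final Logger log; // service dependency

            DefaultFoobar(String alpha, int omega, Logger log) {
                this.alpha = alpha;   // invariant established in the constructor, as usual
                this.omega = omega;
                this.log = log;
            }

            @Override
            public void frob() {
                log.info("frobbing " + alpha + "/" + omega);
            }
        }

        class DefaultFoobarFactory implements FoobarFactory {
            private final Logger log; // injected once, by whatever container is in use

            DefaultFoobarFactory(Logger log) {
                this.log = log;
            }

            @Override
            public Foobar create(String alpha, int omega) {
                return new DefaultFoobar(alpha, omega, log);
            }
        }

    Some containers can generate such factories automatically (Guice calls it assisted injection, Castle Windsor has a typed factory facility), which removes most of the boiler-plate objected to above, though that obviously depends on the container chosen.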

    Read the article

  • Generic Http Module

    - by MartinF
    The problem I am trying to make a generic http module in asp.net C# for handling roles defined by an enum which i want to be able to change by a generic parameter. This will make it possible to use the generic module with any kind of enum defined for each project. The module hooks into the Authenticate event of the FormsAuthenticationModule, and is called on each request to the website. The module exposes public events which could be defined in the global.asax. But i cant seem to figure out how to make the generic http module work like a non generic module. There is 3 main problems. I cant register the generic http module in the web.config like any other module as i cant specify the generic parameter, or is possible somehow ? The way to solve that as far as i can figure out is to create a non-generic http module that intializes the generic HttpModule (the generic parameter is defined in a custom section for the module in the web.config). But that introduces the next problem. I cant find out how to make the public events exposed by the generic module available to hook into through the global.asax as you would normally do with a non-generic module by just making a public method with the name like ModuleClassName_PublicEventName. The init() method on the http module gets an reference to the HttpApplication object created in the global.asax. I dont know if it somehow could be possible with reflection to search for the methods and if they are defined in the global.asax (HttpApplication super class) hook them up with the correct event handler ? or if any methods on the HttpApplication object can be used? How would i store and later get a reference to the generic module created in the non-generic module ? I can get the non-generic module with HttpContext.Current.ApplicationInstance.Modules.Get("TheModule"); but is there any way i can store a reference to the generic module in the non-generic module (cant figure out how it should be possible), or store it somewhere else so i can always get it? If I can get a reference to the generic module from the global.asax etc. the events mentioned in nr. 2 can be manually wired to the methods. Thoughts and other possible solutions Instead of registering the module in the web.config it can be manually initialized by overridding the Init method of the HttpApplication and calling the Init method on the module. But that will introduce some new problems. The module will no longer be added to the the ModulesCollection. So I will need to store a reference somewhere else. This could be done with a property in the global.asax, and by implementing an interface, or by creating an generic abstract base type inheriting from HttpApplication, that the global.asax could inherit from. In the generic abstract base type i could also override the init method. It will still not automatically hook up methods in the global.asax with events in the generic module. If it is possible with reflection to search for defined methods in the super type of the HttpApplication it could be automatically done that way. But i can wire the methods in the global.asax with the events in the generic module manually either in the Init method or anywhere else by getting reference to the generic module. It doesnst really need to implement the IHttpModule interface if i choose to manually initalize the generic module. I could just aswell move all the code to the abstract base type inheriting from the HttpApplication. 
I would prefer to register the module simply by defining it in the web.config, as that would be the easiest and most natural/logical solution. It would also be great if it could be kept as an HttpModule instead of having to define an abstract base type inheriting from HttpApplication, else it will be more tied up and not as loose and pluggable as I wanted it to be (but maybe that is not possible). Another alternative would be to make it all static. As far as I can figure out, I would have to somehow make sure that only one method can be added to the public static events, so it won't add a reference each time a new instance of the global.asax is created. I simply can't work out what the best solution is. I have been messing around with this and thinking about it for days now. Maybe there is an option that I haven't thought of? I hope someone out there can help me.
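    For problem 1, one common workaround in any runtime is to put the type name in configuration as a string and resolve it reflectively when the module initializes. Below is a hedged sketch of that shape written as a Java servlet filter rather than an IHttpModule (the class, the roleEnum init-param and the enum type are all invented); it only shows the "string in config, enum type at init time" step:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        // Rough parallel to the non-generic wrapper module: the role enum cannot be
        // named as a type parameter in declarative config, so its class name is read
        // as a string and resolved reflectively during initialization.
        public class RoleFilter implements Filter {

            private Object[] roles; // the constants of the configured enum type

            @Override
            public void init(FilterConfig config) throws ServletException {
                String enumTypeName = config.getInitParameter("roleEnum"); // e.g. "com.example.ProjectRoles" (hypothetical)
                try {
                    Class<?> enumType = Class.forName(enumTypeName);
                    if (!enumType.isEnum()) {
                        throw new ServletException(enumTypeName + " is not an enum");
                    }
                    roles = enumType.getEnumConstants();
                } catch (ClassNotFoundException e) {
                    throw new ServletException(e);
                }
            }

            @Override
            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                // role checks against 'roles' would go here
                chain.doFilter(req, resp);
            }

            @Override
            public void destroy() {
                // nothing to release in this sketch
            }
        }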

    Read the article

  • apache mod_rewrite rule in httpd.conf for modifying some paths, but not others

    - by wallyk
    I'm having quite a challenge creating an appropriate rewrite rule for Apache/2.2.14 on Fedora 10. I'm working through the CodeIgniter-Doctrine tutorial which uses an .htaccess file. (Search for Removing “index.php” from CodeIgniter urls about 10% down.) But since that's not recommended for a production server, I'm trying to tweak it to work in /etc/httpd/conf/httpd.conf. <VirtualHost *:80> ServerName ci_doctrine DocumentRoot /var/www/html/ci_doctrine ErrorLog /var/log/httpd/cid-error_log CustomLog /var/log/httpd/cid-access_log common <IfModule mod_rewrite.c> RewriteEngine on RewriteLog /var/log/httpd/cid_rewrite RewriteLogLevel 9 # RewriteCond ^/css/style.css$ (these have bad syntax, but that's beside the point) # RewriteRule ^/css/style.css$ /css/style.css [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] </IfModule> <IfModule !mod_rewrite.c> ErrorDocument 404 /ci_doctrine/index.php </IfModule> </VirtualHost> It seems like the tutorial .htaccess rules properly test for existing files and then not alter the URL in such cases, but the rewrite log says that the conditions are true (that is, the file does not exist) even though it's there. 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/login' -> '/index.php//login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK] 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/login' -> '/index.php//login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK] 127.0.0.1 
- - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/css/style.css' -> '/index.php//css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK] 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/css/style.css' -> '/index.php//css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK] The file .../css/style.css was working properly before I started messing around with the rewrite rules, so it should be in the right place. But now the path is always munged up by the rewriting, though the virtual components below index.php are properly translated. What am I doing wrong?

    Read the article

  • How to access members of an rdf list with rdflib (or plain sparql)

    - by tjb
    What is the best way to access the members of an rdf list? I'm using rdflib (python) but an answer given in plain SPARQL is also ok (this type of answer can be used through rdfextras, a rdflib helper library). I'm trying to access the authors of a particular journal article in rdf produced by Zotero (some fields have been removed for brevity): <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:z="http://www.zotero.org/namespaces/export#" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:bib="http://purl.org/net/biblio#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/" xmlns:link="http://purl.org/rss/1.0/modules/link/"> <bib:Article rdf:about="http://www.ncbi.nlm.nih.gov/pubmed/18273724"> <z:itemType>journalArticle</z:itemType> <dcterms:isPartOf rdf:resource="urn:issn:0954-6634"/> <bib:authors> <rdf:Seq> <rdf:li> <foaf:Person> <foaf:surname>Lee</foaf:surname> <foaf:givenname>Hyoun Seung</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Lee</foaf:surname> <foaf:givenname>Jong Hee</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Ahn</foaf:surname> <foaf:givenname>Gun Young</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Lee</foaf:surname> <foaf:givenname>Dong Hun</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Shin</foaf:surname> <foaf:givenname>Jung Won</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Kim</foaf:surname> <foaf:givenname>Dong Hyun</foaf:givenname> </foaf:Person> </rdf:li> <rdf:li> <foaf:Person> <foaf:surname>Chung</foaf:surname> <foaf:givenname>Jin Ho</foaf:givenname> </foaf:Person> </rdf:li> </rdf:Seq> </bib:authors> <dc:title>Fractional photothermolysis for the treatment of acne scars: a report of 27 Korean patients</dc:title> <dcterms:abstract>OBJECTIVES: Atrophic post-acne scarring remains a therapeutically challe *CUT*, erythema and edema. CONCLUSIONS: The 1550-nm erbium-doped FP is associated with significant patient-reported improvement in the appearance of acne scars, with minimal downtime.</dcterms:abstract> <bib:pages>45-49</bib:pages> <dc:date>2008</dc:date> <z:shortTitle>Fractional photothermolysis for the treatment of acne scars</z:shortTitle> <dc:identifier> <dcterms:URI> <rdf:value>http://www.ncbi.nlm.nih.gov/pubmed/18273724</rdf:value> </dcterms:URI> </dc:identifier> <dcterms:dateSubmitted>2010-12-06 11:36:52</dcterms:dateSubmitted> <z:libraryCatalog>NCBI PubMed</z:libraryCatalog> <dc:description>PMID: 18273724</dc:description> </bib:Article> <bib:Journal rdf:about="urn:issn:0954-6634"> <dc:title>The Journal of Dermatological Treatment</dc:title> <prism:volume>19</prism:volume> <prism:number>1</prism:number> <dcterms:alternative>J Dermatolog Treat</dcterms:alternative> <dc:identifier>DOI 10.1080/09546630701691244</dc:identifier> <dc:identifier>ISSN 0954-6634</dc:identifier> </bib:Journal>
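    An rdf:Seq is just a node whose members hang off the numbered membership properties rdf:_1, rdf:_2 and so on, which is also what a SPARQL query would have to match (and order by). Not rdflib, but as a sketch of the same traversal in Java with Apache Jena (the input file name is a placeholder):

        import java.io.FileInputStream;
        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;
        import org.apache.jena.rdf.model.Seq;

        public class ZoteroAuthors {
            public static void main(String[] args) throws Exception {
                Model model = ModelFactory.createDefaultModel();
                try (FileInputStream in = new FileInputStream("zotero.rdf")) { // placeholder file name
                    model.read(in, null);
                }

                Property authors   = model.createProperty("http://purl.org/net/biblio#", "authors");
                Property surname   = model.createProperty("http://xmlns.com/foaf/0.1/", "surname");
                Property givenname = model.createProperty("http://xmlns.com/foaf/0.1/", "givenname");

                Resource article = model.getResource("http://www.ncbi.nlm.nih.gov/pubmed/18273724");
                // bib:authors points at an rdf:Seq; getSeq walks its rdf:_n members in order.
                Seq seq = model.getSeq(article.getProperty(authors).getResource());
                for (int i = 1; i <= seq.size(); i++) { // container indices are 1-based
                    Resource person = seq.getResource(i);
                    System.out.println(person.getProperty(surname).getString() + ", "
                            + person.getProperty(givenname).getString());
                }
            }
        }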

    Read the article

  • MFC tab control: switching tabs

    - by MRM
    I created a simple tab control that has 2 tabs (each tab is a different dialog). The thing is that i don't have any idea how to switch between tabs (when the user presses Titlu Tab1 to show the dialog i made for the first tab, and when it presses Titlu Tab2 to show my other dialog). I added a handler for changing items, but i don't know how should i acces some kind of index or child for tabs. Tab1.h and Tab2.h are headers for dialogs that show only static texts with the name of the each tab. There may be an obvious answer to my question, but i am a real newbie in c++ and MFC. This is my header: // CTabControlDlg.h : header file // #pragma once #include "afxcmn.h" #include "Tab1.h" #include "Tab2.h" // CCTabControlDlg dialog class CCTabControlDlg : public CDialog { // Construction public: CCTabControlDlg(CWnd* pParent = NULL); // standard constructor // Dialog Data enum { IDD = IDD_CTABCONTROL_DIALOG }; protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support // Implementation protected: HICON m_hIcon; // Generated message map functions virtual BOOL OnInitDialog(); afx_msg void OnSysCommand(UINT nID, LPARAM lParam); afx_msg void OnPaint(); afx_msg HCURSOR OnQueryDragIcon(); DECLARE_MESSAGE_MAP() public: CTabCtrl m_tabcontrol1; CTab1 m_tab1; CTab2 m_tab2; afx_msg void OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult); }; And this is the .cpp: // CTabControlDlg.cpp : implementation file // #include "stdafx.h" #include "CTabControl.h" #include "CTabControlDlg.h" #ifdef _DEBUG #define new DEBUG_NEW #endif // CAboutDlg dialog used for App About class CAboutDlg : public CDialog { public: CAboutDlg(); // Dialog Data enum { IDD = IDD_ABOUTBOX }; protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support // Implementation protected: DECLARE_MESSAGE_MAP() }; CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD) { } void CAboutDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); } BEGIN_MESSAGE_MAP(CAboutDlg, CDialog) END_MESSAGE_MAP() // CCTabControlDlg dialog CCTabControlDlg::CCTabControlDlg(CWnd* pParent /*=NULL*/) : CDialog(CCTabControlDlg::IDD, pParent) { m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME); } void CCTabControlDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); DDX_Control(pDX, IDC_TABCONTROL, m_tabcontrol1); } BEGIN_MESSAGE_MAP(CCTabControlDlg, CDialog) ON_WM_SYSCOMMAND() ON_WM_PAINT() ON_WM_QUERYDRAGICON() //}}AFX_MSG_MAP ON_NOTIFY(TCN_SELCHANGE, IDC_TABCONTROL, &CCTabControlDlg::OnTcnSelchangeTabcontrol) END_MESSAGE_MAP() // CCTabControlDlg message handlers BOOL CCTabControlDlg::OnInitDialog() { CDialog::OnInitDialog(); // Add "About..." menu item to system menu. // IDM_ABOUTBOX must be in the system command range. ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX); ASSERT(IDM_ABOUTBOX < 0xF000); CMenu* pSysMenu = GetSystemMenu(FALSE); if (pSysMenu != NULL) { CString strAboutMenu; strAboutMenu.LoadString(IDS_ABOUTBOX); if (!strAboutMenu.IsEmpty()) { pSysMenu->AppendMenu(MF_SEPARATOR); pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu); } } // Set the icon for this dialog. 
The framework does this automatically // when the application's main window is not a dialog SetIcon(m_hIcon, TRUE); // Set big icon SetIcon(m_hIcon, FALSE); // Set small icon // TODO: Add extra initialization here CTabCtrl* pTabCtrl = (CTabCtrl*)GetDlgItem(IDC_TABCONTROL); m_tab1.Create(IDD_TAB1, pTabCtrl); TCITEM item1; item1.mask = TCIF_TEXT | TCIF_PARAM; item1.lParam = (LPARAM)& m_tab1; item1.pszText = _T("Titlu Tab1"); pTabCtrl->InsertItem(0, &item1); //Pozitionarea dialogului CRect rcItem; pTabCtrl->GetItemRect(0, &rcItem); m_tab1.SetWindowPos(NULL, rcItem.left, rcItem.bottom + 1, 0, 0, SWP_NOSIZE | SWP_NOZORDER ); m_tab1.ShowWindow(SW_SHOW); // al doilea tab m_tab2.Create(IDD_TAB2, pTabCtrl); TCITEM item2; item2.mask = TCIF_TEXT | TCIF_PARAM; item2.lParam = (LPARAM)& m_tab1; item2.pszText = _T("Titlu Tab2"); pTabCtrl->InsertItem(0, &item2); //Pozitionarea dialogului //CRect rcItem; pTabCtrl->GetItemRect(0, &rcItem); m_tab2.SetWindowPos(NULL, rcItem.left, rcItem.bottom + 1, 0, 0, SWP_NOSIZE | SWP_NOZORDER ); m_tab2.ShowWindow(SW_SHOW); return TRUE; // return TRUE unless you set the focus to a control } void CCTabControlDlg::OnSysCommand(UINT nID, LPARAM lParam) { if ((nID & 0xFFF0) == IDM_ABOUTBOX) { CAboutDlg dlgAbout; dlgAbout.DoModal(); } else { CDialog::OnSysCommand(nID, lParam); } } // If you add a minimize button to your dialog, you will need the code below // to draw the icon. For MFC applications using the document/view model, // this is automatically done for you by the framework. void CCTabControlDlg::OnPaint() { if (IsIconic()) { CPaintDC dc(this); // device context for painting SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0); // Center icon in client rectangle int cxIcon = GetSystemMetrics(SM_CXICON); int cyIcon = GetSystemMetrics(SM_CYICON); CRect rect; GetClientRect(&rect); int x = (rect.Width() - cxIcon + 1) / 2; int y = (rect.Height() - cyIcon + 1) / 2; // Draw the icon dc.DrawIcon(x, y, m_hIcon); } else { CDialog::OnPaint(); } } // The system calls this function to obtain the cursor to display while the user drags // the minimized window. HCURSOR CCTabControlDlg::OnQueryDragIcon() { return static_cast<HCURSOR>(m_hIcon); } void CCTabControlDlg::OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult) { // TODO: Add your control notification handler code here *pResult = 0; }
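    For the switching itself, the usual MFC approach is to read the selected index inside the TCN_SELCHANGE handler and show or hide the embedded dialogs accordingly. A sketch, assuming the second tab is inserted at index 1 (the OnInitDialog code above inserts both items at index 0 and points item2.lParam at m_tab1, which is worth correcting as well):

        void CCTabControlDlg::OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult)
        {
            int sel = m_tabcontrol1.GetCurSel();              // index of the tab just selected
            m_tab1.ShowWindow(sel == 0 ? SW_SHOW : SW_HIDE);  // dialog behind "Titlu Tab1"
            m_tab2.ShowWindow(sel == 1 ? SW_SHOW : SW_HIDE);  // dialog behind "Titlu Tab2"
            *pResult = 0;
        }

    In OnInitDialog, inserting the second item with pTabCtrl->InsertItem(1, &item2) keeps the tab indexes in step with this handler.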

    Read the article

  • Android programming - How to access [to draw on] an XML view in the main.xml layout, using code

    - by user556248
    Ok I'm a newbie at Android programming, have a hard time with the graphics part. Understand the beauty of creating layout in XML file, but lost how to access various elements, especially a View element to draw on it. See example of my layout main.xml here; <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/root" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/Title" android:text="App title" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textColor="#000000" android:background="#A0A0FF"/> </LinearLayout> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/PaperLayout" android:layout_width="fill_parent" android:layout_height="0dp" android:layout_weight="1" android:orientation="horizontal" android:focusable="true"> <View android:id="@+id/Paper" android:layout_width="fill_parent" android:layout_height="fill_parent" /> </LinearLayout> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <Button android:id="@+id/File" android:layout_width="fill_parent" android:layout_weight="1" android:layout_height="34dp" android:layout_alignParentTop="true" android:layout_centerInParent="true" android:clickable="true" android:textSize="10sp" android:text="File" /> <Button android:id="@+id/Edit" android:layout_width="fill_parent" android:layout_weight="1" android:layout_height="34dp" android:clickable="true" android:textSize="10sp" android:text="Edit" /> </LinearLayout> </LinearLayout> As you can see I have a custom app title bar, then a View filling middle, and finally two buttons in bottom. Catching button events and responding to, for example changing title bar text and changing View background color works fine, but how the heck do I access and more importantly draw on the view defined in main.xml UPDATE: That for your suggestion, however besides that I need a View, not ImageView and you are missing a parameter on canvas.drawText and an ending bracket, it does not work. Now this is most likely because you missed the fact that I am a newbie and assuming I can fill in any blanks. 
Now first of all I do NOT understand why in my main.xml layout file I can create a View or even a SurfaceView element, which is what I need, but according to your solution I don't even specify the View like Anyways I edited my main.xml according to your solution, and slimmed it down a bit for simplicity; <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/root" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/Title" android:text="App title" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textColor="#000000" android:background="#A0A0FF"/> </LinearLayout> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/PaperLayout" android:layout_width="fill_parent" android:layout_height="0dp" android:layout_weight="1" android:orientation="horizontal" android:focusable="true"> <com.example.MyApp.CustomView android:id="@+id/Paper" android:layout_width="fill_parent" android:layout_height="fill_parent" /> <com.example.colorbook.CustomView/> </LinearLayout> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <Button android:id="@+id/File" android:layout_width="fill_parent" android:layout_weight="1" android:layout_height="34dp" android:layout_alignParentTop="true" android:layout_centerInParent="true" android:clickable="true" android:textSize="10sp" android:text="File" /> </LinearLayout> </LinearLayout> In my main java file, MyApp.java, I added this after OnCreate; public class CustomView extends ImageView { @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); canvas.drawText("Your Text", 1, 1, null); } } But I get error on the "CustomView" part; "Implicit super constructor ImageView() is undefined for default constructor.Must define an explicit constructor" Eclipse suggests 3 quick fixes about adding constructor, but none helps, well it removes error but gives error on app when running. I hope somebody can break this down for me and provide a solution, and perhaps explain why I can't just create a View element in my main.xml layotu file and draw on it in code.
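    The "Implicit super constructor ... is undefined" error appears because a view inflated from XML must expose a (Context, AttributeSet) constructor, and the class has to be a top-level class resolvable by the fully qualified name used in the layout tag. A minimal sketch, assuming the package is com.example.myapp and the layout uses <com.example.myapp.CustomView android:id="@+id/Paper" .../> (names are illustrative):

        package com.example.myapp;

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.util.AttributeSet;
        import android.view.View;

        // Top-level class, not an inner class of the Activity, so the inflater can construct it.
        public class CustomView extends View {
            private final Paint paint = new Paint();

            // The constructor the layout inflater calls; its absence triggers the
            // "implicit super constructor is undefined" error.
            public CustomView(Context context, AttributeSet attrs) {
                super(context, attrs);
                paint.setColor(Color.BLACK);
                paint.setTextSize(24f);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                super.onDraw(canvas);
                canvas.drawText("Your Text", 10f, 30f, paint);  // custom drawing goes here
            }
        }

    Calling invalidate() on the view later forces another onDraw pass whenever the drawing should change.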

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and dropping a replacement built on an alternate technology into its place, you have to go in and change classes across all of the packages. However, I don't see this as a big deal. It may be a lack of experience on my part, but I have to imagine that the number of times you swap out a technology pales in comparison to the number of times you go in and edit vertical feature slices within your system. So I guess the question goes out to you: how do you name your packages, and why? Please understand that I don't necessarily think I've stumbled onto the golden goose here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can, so that I can move on. Thanks in advance!
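    As a concrete illustration of the encapsulation point above (all names hypothetical), a "logical" package lets the persistence class stay package-private, so other subsystems only ever see the public service:

        // com/example/app/orders/OrderDao.java
        package com.example.app.orders;

        // Package-private: code outside the orders package cannot even tell that orders are persisted.
        class OrderDao {
            void save(String orderId) { /* JDBC/ORM details stay inside this package */ }
        }

        // com/example/app/orders/OrderService.java
        package com.example.app.orders;

        // The only public entry point into the orders subsystem.
        public class OrderService {
            private final OrderDao dao = new OrderDao();

            public void placeOrder(String orderId) {
                dao.save(orderId);
            }
        }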

    Read the article

  • Scala n00b: Critique my code

    - by Peter
    G'day everyone, I'm a Scala n00b (but am experienced with other languages) and am learning the language as I find time - very much enjoying it so far! Usually when learning a new language the first thing I do is implement Conway's Game of Life, since it's just complex enough to give a good sense of the language, but small enough in scope to be able to whip up in a couple of hours (most of which is spent wrestling with syntax). Anyhoo, having gone through this exercise with Scala I was hoping the Scala gurus out there might take a look at the code I've ended up with and provide feedback on it. I'm after anything - algorithmic improvements (particularly concurrent solutions!), stylistic improvements, alternative APIs or language constructs, disgust at the length of my function names - whatever feedback you've got, I'm keen to hear it! You should be able to run the following script via "scala GameOfLife.scala" - by default it will run a 20x20 board with a single glider on it - please feel free to experiment. // CONWAY'S GAME OF LIFE (SCALA) abstract class GameOfLifeBoard(val aliveCells : Set[Tuple2[Int, Int]]) { // Executes a "time tick" - returns a new board containing the next generation def tick : GameOfLifeBoard // Is the board empty? def empty : Boolean = aliveCells.size == 0 // Is the given cell alive? protected def alive(cell : Tuple2[Int, Int]) : Boolean = aliveCells contains cell // Is the given cell dead? protected def dead(cell : Tuple2[Int, Int]) : Boolean = !alive(cell) } class InfiniteGameOfLifeBoard(aliveCells : Set[Tuple2[Int, Int]]) extends GameOfLifeBoard(aliveCells) { // Executes a "time tick" - returns a new board containing the next generation override def tick : GameOfLifeBoard = new InfiniteGameOfLifeBoard(nextGeneration) // The next generation of this board protected def nextGeneration : Set[Tuple2[Int, Int]] = aliveCells flatMap neighbours filter shouldCellLiveInNextGeneration // Should the given cell should live in the next generation? protected def shouldCellLiveInNextGeneration(cell : Tuple2[Int, Int]) : Boolean = (alive(cell) && (numberOfAliveNeighbours(cell) == 2 || numberOfAliveNeighbours(cell) == 3)) || (dead(cell) && numberOfAliveNeighbours(cell) == 3) // The number of alive neighbours for the given cell protected def numberOfAliveNeighbours(cell : Tuple2[Int, Int]) : Int = aliveNeighbours(cell) size // Returns the alive neighbours for the given cell protected def aliveNeighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] = aliveCells intersect neighbours(cell) // Returns all neighbours (whether dead or alive) for the given cell protected def neighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] = Set((cell._1-1, cell._2-1), (cell._1, cell._2-1), (cell._1+1, cell._2-1), (cell._1-1, cell._2), (cell._1+1, cell._2), (cell._1-1, cell._2+1), (cell._1, cell._2+1), (cell._1+1, cell._2+1)) // Information on where the currently live cells are protected def xVals = aliveCells map { cell => cell._1 } protected def xMin = (xVals reduceLeft (_ min _)) - 1 protected def xMax = (xVals reduceLeft (_ max _)) + 1 protected def xRange = xMin until xMax + 1 protected def yVals = aliveCells map { cell => cell._2 } protected def yMin = (yVals reduceLeft (_ min _)) - 1 protected def yMax = (yVals reduceLeft (_ max _)) + 1 protected def yRange = yMin until yMax + 1 // Returns a simple graphical representation of this board override def toString : String = { var result = "" for (y <- yRange) { for (x <- xRange) { if (alive (x,y)) result += "# " else result += ". 
" } result += "\n" } result } // Equality stuff override def equals(other : Any) : Boolean = { other match { case that : InfiniteGameOfLifeBoard => (that canEqual this) && that.aliveCells == this.aliveCells case _ => false } } def canEqual(other : Any) : Boolean = other.isInstanceOf[InfiniteGameOfLifeBoard] override def hashCode = aliveCells.hashCode } class FiniteGameOfLifeBoard(val boardWidth : Int, val boardHeight : Int, aliveCells : Set[Tuple2[Int, Int]]) extends InfiniteGameOfLifeBoard(aliveCells) { override def tick : GameOfLifeBoard = new FiniteGameOfLifeBoard(boardWidth, boardHeight, nextGeneration) // Determines the coordinates of all of the neighbours of the given cell override protected def neighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] = super.neighbours(cell) filter { cell => cell._1 >= 0 && cell._1 < boardWidth && cell._2 >= 0 && cell._2 < boardHeight } // Information on where the currently live cells are override protected def xRange = 0 until boardWidth override protected def yRange = 0 until boardHeight // Equality stuff override def equals(other : Any) : Boolean = { other match { case that : FiniteGameOfLifeBoard => (that canEqual this) && that.boardWidth == this.boardWidth && that.boardHeight == this.boardHeight && that.aliveCells == this.aliveCells case _ => false } } override def canEqual(other : Any) : Boolean = other.isInstanceOf[FiniteGameOfLifeBoard] override def hashCode : Int = { 41 * ( 41 * ( 41 + super.hashCode ) + boardHeight.hashCode ) + boardWidth.hashCode } } class GameOfLife(initialBoard: GameOfLifeBoard) { // Run the game of life until the board is empty or the exact same board is seen twice // Important note: this method does NOT necessarily terminate!! def go : Unit = { var currentBoard = initialBoard var previousBoards = List[GameOfLifeBoard]() while (!currentBoard.empty && !(previousBoards contains currentBoard)) { print(27.toChar + "[2J") // ANSI: clear screen print(27.toChar + "[;H") // ANSI: move cursor to top left corner of screen println(currentBoard.toString) Thread.sleep(75) // Warning: unbounded list concatenation can result in OutOfMemoryExceptions ####TODO: replace with LRU bounded list previousBoards = List(currentBoard) ::: previousBoards currentBoard = currentBoard tick } // Print the final board print(27.toChar + "[2J") // ANSI: clear screen print(27.toChar + "[;H") // ANSI: move cursor to top left corner of screen println(currentBoard.toString) } } // Script starts here val simple = Set((1,1)) val square = Set((4,4), (4,5), (5,4), (5,5)) val glider = Set((2,1), (3,2), (1,3), (2,3), (3,3)) val initialBoard = glider (new GameOfLife(new FiniteGameOfLifeBoard(20, 20, initialBoard))).go //(new GameOfLife(new InfiniteGameOfLifeBoard(initialBoard))).go // COPYRIGHT PETER MONKS 2010 Thanks! Peter

    Read the article

  • Error with Phoenix placeholder _val in Boost.Spirit.Lex :(

    - by GooRoo
    Hello, everybody. I'm newbie in Boost.Spirit.Lex. Some strange error appears every time I try to use lex::_val in semantics actions in my simple lexer: #ifndef _TOKENS_H_ #define _TOKENS_H_ #include <iostream> #include <string> #include <boost/spirit/include/lex_lexertl.hpp> #include <boost/spirit/include/phoenix_operator.hpp> #include <boost/spirit/include/phoenix_statement.hpp> #include <boost/spirit/include/phoenix_container.hpp> namespace lex = boost::spirit::lex; namespace phx = boost::phoenix; enum tokenids { ID_IDENTIFICATOR = 1, ID_CONSTANT, ID_OPERATION, ID_BRACKET, ID_WHITESPACES }; template <typename Lexer> struct mega_tokens : lex::lexer<Lexer> { mega_tokens() : identifier(L"[a-zA-Z_][a-zA-Z0-9_]*", ID_IDENTIFICATOR) , constant (L"[0-9]+(\\.[0-9]+)?", ID_CONSTANT ) , operation (L"[\\+\\-\\*/]", ID_OPERATION ) , bracket (L"[\\(\\)\\[\\]]", ID_BRACKET ) { using lex::_tokenid; using lex::_val; using phx::val; this->self = operation [ std::wcout << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] | identifier [ std::wcout << val(L'<') << _tokenid << val(L':') << _val << val(L'>') ] | constant [ std::wcout << val(L'<') << _tokenid // << val(L':') << _val << val(L'>') ] | bracket [ std::wcout << phx::val(lex::_val) << val(L'<') << _tokenid // << val(L':') << lex::_val << val(L'>') ] ; } lex::token_def<wchar_t, wchar_t> operation; lex::token_def<std::wstring, wchar_t> identifier; lex::token_def<double, wchar_t> constant; lex::token_def<wchar_t, wchar_t> bracket; }; #endif // _TOKENS_H_ and #include <cstdlib> #include <iostream> #include <locale> #include <boost/spirit/include/lex_lexertl.hpp> #include "tokens.h" int main() { setlocale(LC_ALL, "Russian"); namespace lex = boost::spirit::lex; typedef std::wstring::iterator base_iterator; typedef lex::lexertl::token < base_iterator, boost::mpl::vector<wchar_t, std::wstring, double, wchar_t>, boost::mpl::true_ > token_type; typedef lex::lexertl::actor_lexer<token_type> lexer_type; typedef mega_tokens<lexer_type>::iterator_type iterator_type; mega_tokens<lexer_type> mega_lexer; std::wstring str = L"alfa+x1*(2.836-x2[i])"; base_iterator first = str.begin(); bool r = lex::tokenize(first, str.end(), mega_lexer); if (r) { std::wcout << L"Success" << std::endl; } else { std::wstring rest(first, str.end()); std::wcerr << L"Lexical analysis failed\n" << L"stopped at: \"" << rest << L"\"\n"; } return EXIT_SUCCESS; } This code causes an error in Boost header 'boost/spirit/home/lex/argument.hpp' on line 167 while compiling: return: can't convert 'const boost::variant' to 'boost::variant &' When I don't use lex::_val program compiles with no errors. Obviously, I use _val in wrong way, but I do not know how to do this correctly. Help, please! :) P.S. And sorry for my terrible English…

    Read the article

  • errors deploying an EAR with EJBs 2.1 into JBoss AS5

    - by Marina
    Hi, I'm porting an application with EJBs 2.1 from Weblogic9 to JBoss AS5. I have made some of the changes like adding jboss.xml descriptors to EJBs and fixing application.xml of the EAR, but there are still problems when deploying the EAR. Here is a summary of the the latest error I'm getting when the first EJB is being deployed by JBoss (I will add the full stack trace at the end of the message): 14:15:48,124 ERROR [AbstractKernelController] Error installing to Parse: name=vf sfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/ state =Not Installed mode=Manual requiredState=Parse org.jboss.deployers.spi.DeploymentException: Error creating managed object for v fsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/admin -ejb.jar/ .... Caused by: org.jboss.xb.binding.JBossXBException: Failed to parse source: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j2ee/dtd/jboss_2_4.dtd .... Caused by: org.jboss.xb.binding.JBossXBRuntimeException: -1:-1 94:3 The markup in the document preceding the root element must be well-formed. Is this a problem with parsing the jboss_2_4.dtd itself? or is it something worng with my descriptors for the EJB? When I try to validate the jboss_2_4.dtd in an XML editor it does complain about a syntax error at line 94:1 , which is the beginning of the first declaration, although it looks fine. Any ideas? Thanks! Marina Full error stack trace: 14:15:48,124 ERROR [AbstractKernelController] Error installing to Parse: name=vf sfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/ state =Not Installed mode=Manual requiredState=Parse org.jboss.deployers.spi.DeploymentException: Error creating managed object for v fsfile:/C:/Marina/Tools/jboss-5.1.0.GA/server/default/deploy/contracts.ear/admin -ejb.jar/ at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentExcept ion(DeploymentException.java:49) at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithO utput.createMetaData(AbstractParsingDeployerWithOutput.java:362) at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithO utput.createMetaData(AbstractParsingDeployerWithOutput.java:322) at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithO utput.createMetaData(AbstractParsingDeployerWithOutput.java:294) at org.jboss.deployment.JBossEjbParsingDeployer.createMetaData(JBossEjbP arsingDeployer.java:95) at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithO utput.deploy(AbstractParsingDeployerWithOutput.java:234) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(Deployer Wrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(Deployer sImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFi rst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFi rst(DeployersImpl.java:1210) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(Deployers Impl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(Abstra ctControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractContr oller.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(Abstra ctController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(Abstr actController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(Abstr 
actController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractContro ller.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractContro ller.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(Deployers Impl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeploye rImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter .process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction .install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileActi on.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileServ ice.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(Abstra ctControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractContr oller.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(Abstra ctController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(Abstr actController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(Abstr actController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractContro ller.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractContro ller.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileServ ice.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start( ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java: 461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) Caused by: org.jboss.xb.binding.JBossXBException: Failed to parse source: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j 2ee/dtd/jboss_2_4.dtd at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBPars er.java:203) at org.jboss.xb.binding.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java :168) at org.jboss.xb.util.JBossXBHelper.parse(JBossXBHelper.java:189) at org.jboss.xb.util.JBossXBHelper.parse(JBossXBHelper.java:166) at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(Sch emaResolverDeployer.java:137) at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(Sch emaResolverDeployer.java:121) at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parse AndInit(AbstractVFSParsingDeployer.java:256) at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parse (AbstractVFSParsingDeployer.java:188) at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithO utput.createMetaData(AbstractParsingDeployerWithOutput.java:348) ... 
35 more Caused by: org.jboss.xb.binding.JBossXBRuntimeException: Failed to parse schema for nsURI=, baseURI=null, schemaLocation=http://www.jboss.org/j2ee/dtd/jboss_2_4 .dtd at org.jboss.xb.binding.resolver.AbstractMutableSchemaResolver.resolve(A bstractMutableSchemaResolver.java:293) at org.jboss.xb.binding.sunday.unmarshalling.SundayContentHandler.startE lement(SundayContentHandler.java:274) at org.jboss.xb.binding.parser.sax.SaxJBossXBParser$DelegatingContentHan dler.startElement(SaxJBossXBParser.java:401) at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Sour ce) at org.apache.xerces.xinclude.XIncludeHandler.startElement(Unknown Sourc e) at org.apache.xerces.impl.dtd.XMLDTDValidator.startElement(Unknown Sourc e) at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unkn own Source) at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.s canRootElementHook(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContent Dispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Un known Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Sour ce) at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBPars er.java:199) ... 43 more Caused by: org.jboss.xb.binding.JBossXBRuntimeException: -1:-1 94:3 The markup i n the document preceding the root element must be well-formed. at org.jboss.xb.binding.sunday.unmarshalling.XsdBinderTerminatingErrorHa ndler.handleError(XsdBinderTerminatingErrorHandler.java:40) at org.apache.xerces.impl.xs.XMLSchemaLoader.reportDOMFatalError(Unknown Source) at org.apache.xerces.impl.xs.XSLoaderImpl.load(Unknown Source) at org.jboss.xb.binding.Util.loadSchema(Util.java:395) at org.jboss.xb.binding.sunday.unmarshalling.XsdBinder.bind(XsdBinder.ja va:176) at org.jboss.xb.binding.sunday.unmarshalling.XsdBinder.bind(XsdBinder.ja va:147) at org.jboss.xb.binding.resolver.AbstractMutableSchemaResolver.resolve(A bstractMutableSchemaResolver.java:285) ... 58 more

    Read the article

  • How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

    - by epale
    Hi everyone, I have managed to use the reflection sample app from apple to create a reflection from a UIImageView. But the problem is that when I change the picture inside the UIImageView, the reflection from the previous displayed picture remains on the screen. The new reflection on the next picture then overlaps the previous reflection. How do I ensure that the previous reflection is removed when I change to the next picture? Thank you so much. I hope my question is not too basic. Here is the codes which i used so far: //reflection self.view.autoresizesSubviews = YES; self.view.userInteractionEnabled = YES; // create the reflection view CGRect reflectionRect = currentView.frame; // the reflection is a fraction of the size of the view being reflected reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction; // and is offset to be at the bottom of the view being reflected reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height); reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect]; // determine the size of the reflection to create NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction; // create the reflection image, assign it to the UIImageView and add the image view to the containerView reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight]; reflectionView.alpha = kDefaultReflectionOpacity; [self.view addSubview:reflectionView]; //reflection */ Then the codes below are used to form the reflection: CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh) { CGImageRef theCGImage = NULL; // gradient is always black-white and the mask must be in the gray colorspace CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // create the bitmap context CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone); // define the start and end grayscale values (with the alpha, even though // our bitmap context doesn't support alpha the gradient requires it) CGFloat colors[] = {0.0, 1.0, 1.0, 1.0}; // create the CGGradient and then release the gray color space CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2); CGColorSpaceRelease(colorSpace); // create the start and end points for the gradient vector (straight down) CGPoint gradientStartPoint = CGPointZero; CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh); // draw the gradient into the gray bitmap context CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint, gradientEndPoint, kCGGradientDrawsAfterEndLocation); CGGradientRelease(grayScaleGradient); // convert the context into a CGImageRef and release the context theCGImage = CGBitmapContextCreateImage(gradientBitmapContext); CGContextRelease(gradientBitmapContext); // return the imageref containing the gradient return theCGImage; } CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh) { CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // create the bitmap context CGContextRef bitmapContext = CGBitmapContextCreate (nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, // this will give us an optimal BGRA format for the device: (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst)); CGColorSpaceRelease(colorSpace); return bitmapContext; } (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height { if (!height) return nil; // create a bitmap graphics context 
the size of the image CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height); // offset the context - // This is necessary because, by default, the layer created by a view for caching its content is flipped. // But when you actually access the layer content and have it rendered it is inverted. Since we're only creating // a context the size of our reflection view (a fraction of the size of the main view) we have to translate the // context the delta in size, and render it. // CGFloat translateVertical= fromImage.bounds.size.height - height; CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical); // render the layer into the bitmap context CALayer *layer = fromImage.layer; [layer renderInContext:mainViewContentContext]; // create CGImageRef of the main view bitmap content, and then release that bitmap context CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext); CGContextRelease(mainViewContentContext); // create a 2 bit CGImage containing a gradient that will be used for masking the // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient CGImageRef gradientMaskImage = CreateGradientImage(1, height); // create an image by masking the bitmap of the mainView content with the gradient view // then release the pre-masked content bitmap and the gradient bitmap CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage); CGImageRelease(mainViewContentBitmapContext); CGImageRelease(gradientMaskImage); // convert the finished reflection image to a UIImage UIImage *theImage = [UIImage imageWithCGImage:reflectionImage]; // image is retained by the property setting above, so we can release the original CGImageRelease(reflectionImage); return theImage; } */
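    The leftover reflection is simply the previous UIImageView still sitting in the view hierarchy; removing (or reusing) it before building the new one fixes the overlap. A hedged sketch of a helper to call whenever the picture changes, in the same manual retain/release style as the code above:

        - (void)updateReflection
        {
            // Drop the reflection that belongs to the previous picture.
            [reflectionView removeFromSuperview];
            [reflectionView release];
            reflectionView = nil;

            NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;

            CGRect reflectionRect = currentView.frame;
            reflectionRect.size.height = reflectionHeight;
            reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);

            reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
            reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
            reflectionView.alpha = kDefaultReflectionOpacity;
            [self.view addSubview:reflectionView];
        }

    Alternatively, keep the single reflectionView around for the lifetime of the screen and only reassign its image property each time the picture changes.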

    Read the article

  • Paging over a lazy-loaded collection with NHibernate

    - by HackedByChinese
    I read this article where Ayende states NHibernate can (compared to EF 4): Collection with lazy=”extra” – Lazy extra means that NHibernate adapts to the operations that you might run on top of your collections. That means that blog.Posts.Count will not force a load of the entire collection, but rather would create a “select count(*) from Posts where BlogId = 1” statement, and that blog.Posts.Contains() will likewise result in a single query rather than paying the price of loading the entire collection to memory. Collection filters and paged collections - this allows you to define additional filters (including paging!) on top of your entities collections, which means that you can easily page through the blog.Posts collection, and not have to load the entire thing into memory. So I decided to put together a test case. I created the cliché Blog model as a simple demonstration, with two classes as follows: public class Blog { public virtual int Id { get; private set; } public virtual string Name { get; set; } public virtual ICollection<Post> Posts { get; private set; } public virtual void AddPost(Post item) { if (Posts == null) Posts = new List<Post>(); if (!Posts.Contains(item)) Posts.Add(item); } } public class Post { public virtual int Id { get; private set; } public virtual string Title { get; set; } public virtual string Body { get; set; } public virtual Blog Blog { get; private set; } } My mappings files look like this: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true"> <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Blogs"> <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Id" /> <generator class="identity" /> </id> <property name="Name" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Name" /> </property> <property name="Type" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Type" /> </property> <bag lazy="extra" name="Posts"> <key> <column name="Blog_Id" /> </key> <one-to-many class="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </bag> </class> </hibernate-mapping> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true"> <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Posts"> <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Id" /> <generator class="identity" /> </id> <property name="Title" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Title" /> </property> <property name="Body" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Body" /> </property> <many-to-one class="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Blog"> <column name="Blog_id" /> </many-to-one> </class> </hibernate-mapping> My test case looks something like this: using (ISession session = Configuration.Current.CreateSession()) // this class returns a custom 
ISession that represents either EF4 or NHibernate { blogs = (from b in session.Linq<Blog>() where b.Name.Contains("Test") orderby b.Id select b); Console.WriteLine("# of Blogs containing 'Test': {0}", blogs.Count()); Console.WriteLine("Viewing the first 5 matching Blogs."); foreach (Blog b in blogs.Skip(0).Take(5)) { Console.WriteLine("Blog #{0} \"{1}\" has {2} Posts.", b.Id, b.Name, b.Posts.Count); Console.WriteLine("Viewing first 5 matching Posts."); foreach (Post p in b.Posts.Skip(0).Take(5)) { Console.WriteLine("Post #{0} \"{1}\" \"{2}\"", p.Id, p.Title, p.Body); } } } Using lazy="extra", the call to b.Posts.Count does do a SELECT COUNT(Id)... which is great. However, b.Posts.Skip(0).Take(5) just grabs all Posts for Blog.Id = ?id, and then LINQ on the application side is just taking the first 5 from the resulting collection. What gives?
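    When the LINQ provider will not translate Skip/Take on a mapped collection, NHibernate's collection-filter API (ISession.CreateFilter) pages at the SQL level instead. A hedged sketch, assuming the custom wrapper exposes the underlying NHibernate ISession as nhSession:

        // Pages blog.Posts in SQL rather than loading the collection and paging in memory.
        var firstFivePosts = nhSession
            .CreateFilter(blog.Posts, "order by this.Id")
            .SetFirstResult(0)
            .SetMaxResults(5)
            .List<Post>();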

    Read the article

  • Is it possible to shorten my main function in this code?

    - by AjiPorter
    Is it possible for me to shorten my main() by creating a class? If so, what part of my code would most likely be inside the class? Thanks again to those who'll answer. :) #include <iostream> #include <fstream> #include <string> #include <ctime> #include <cstdlib> #define SIZE 20 using namespace std; struct textFile { string word; struct textFile *next; }; textFile *head, *body, *tail, *temp; int main() { ifstream wordFile("WORDS.txt", ios::in); // file object constructor /* stores words in the file into an array */ string words[SIZE]; char pointer; int i; for(i = 0; i < SIZE; i++) { while(wordFile >> pointer) { if(!isalpha(pointer)) { pointer++; break; } words[i] = words[i] + pointer; } } /* stores the words in the array to a randomized linked list */ srand(time(NULL)); int index[SIZE] = {0}; // temporary array of index that will contain randomized indexes of array words int j = 0, ctr; // assigns indexes to array index while(j < SIZE) { i = rand() % SIZE; ctr = 0; for(int k = 0; k < SIZE; k++) { if(!i) break; else if(i == index[k]) { // checks if the random number has previously been stored as index ctr = 1; break; } } if(!ctr) { index[j] = i; // assigns the random number to the current index of array index j++; } } /* makes sure that there are no double zeros on the array */ ctr = 0; for(i = 0; i < SIZE; i++) { if(!index[i]) ctr++; } if(ctr > 1) { int temp[ctr-1]; for(j = 0; j < ctr-1; j++) { for(i = 0; i < SIZE; i++) { if(!index[i]) { int ctr2 = 0; for(int k = 0; k < ctr-1; k++) { if(i == temp[k]) ctr2 = 1; } if(!ctr2) temp[j] = i; } } } j = ctr - 1; while(j > 0) { i = rand() % SIZE; ctr = 0; for(int k = 0; k < SIZE; k++) { if(!i || i == index[k]) { ctr = 1; break; } } if(!ctr) { index[temp[j-1]] = i; j--; } } } head = tail = body = temp = NULL; for(j = 0; j < SIZE; j++) { body = (textFile*) malloc (sizeof(textFile)); body->word = words[index[j]]; if(head == NULL) { head = tail = body; } else { tail->next = body; tail = body; cout << tail->word << endl; } } temp = head; while(temp != NULL) { cout << temp->word << endl; temp = temp->next; } return 0; }
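    Yes - most of main() is plumbing that can live inside a class. A minimal sketch of one possible split (names are mine, and it swaps the hand-rolled linked list and index shuffling for std::vector and std::shuffle):

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <random>
        #include <string>
        #include <vector>

        class WordShuffler {
        public:
            bool load(const std::string &path) {          // read whitespace-separated words
                std::ifstream in(path);
                std::string word;
                while (in >> word) words_.push_back(word);
                return !words_.empty();
            }
            void shuffle() {                              // randomize the order once
                std::mt19937 rng{std::random_device{}()};
                std::shuffle(words_.begin(), words_.end(), rng);
            }
            void print() const {
                for (const std::string &w : words_) std::cout << w << '\n';
            }
        private:
            std::vector<std::string> words_;
        };

        int main() {
            WordShuffler words;
            if (!words.load("WORDS.txt")) return 1;
            words.shuffle();
            words.print();
            return 0;
        }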

    Read the article

  • How to merge two different Makefiles?

    - by martijnn2008
    I have did some reading on "Merging Makefiles", one suggest I should leave the two Makefiles separate in different folders [1]. For me this look counter intuitive, because I have the following situation: I have 3 source files (main.cpp flexibility.cpp constraints.cpp) one of them (flexibility.cpp) is making use of the COIN-OR Linear Programming library (Clp) When installing this library on my computer it makes sample Makefiles, which I have adjust the Makefile and it currently makes a good working binary. # Copyright (C) 2006 International Business Machines and others. # All Rights Reserved. # This file is distributed under the Eclipse Public License. # $Id: Makefile.in 726 2006-04-17 04:16:00Z andreasw $ ########################################################################## # You can modify this example makefile to fit for your own program. # # Usually, you only need to change the five CHANGEME entries below. # ########################################################################## # To compile other examples, either changed the following line, or # add the argument DRIVER=problem_name to make DRIVER = main # CHANGEME: This should be the name of your executable EXE = clp # CHANGEME: Here is the name of all object files corresponding to the source # code that you wrote in order to define the problem statement OBJS = $(DRIVER).o constraints.o flexibility.o # CHANGEME: Additional libraries ADDLIBS = # CHANGEME: Additional flags for compilation (e.g., include flags) ADDINCFLAGS = # CHANGEME: Directory to the sources for the (example) problem definition # files SRCDIR = . ########################################################################## # Usually, you don't have to change anything below. Note that if you # # change certain compiler options, you might have to recompile the # # COIN package. 
# ########################################################################## COIN_HAS_PKGCONFIG = TRUE COIN_CXX_IS_CL = #TRUE COIN_HAS_SAMPLE = TRUE COIN_HAS_NETLIB = #TRUE # C++ Compiler command CXX = g++ # C++ Compiler options CXXFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD # additional C++ Compiler options for linking CXXLINKFLAGS = -Wl,--rpath -Wl,/home/martijn/Downloads/COIN/coin-Clp/lib # C Compiler command CC = gcc # C Compiler options CFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD # Sample data directory ifeq ($(COIN_HAS_SAMPLE), TRUE) ifeq ($(COIN_HAS_PKGCONFIG), TRUE) CXXFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\" CFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\" else CXXFLAGS += -DSAMPLEDIR=\"\" CFLAGS += -DSAMPLEDIR=\"\" endif endif # Netlib data directory ifeq ($(COIN_HAS_NETLIB), TRUE) ifeq ($(COIN_HAS_PKGCONFIG), TRUE) CXXFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\" CFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\" else CXXFLAGS += -DNETLIBDIR=\"\" CFLAGS += -DNETLIBDIR=\"\" endif endif # Include directories (we use the CYGPATH_W variables to allow compilation with Windows compilers) ifeq ($(COIN_HAS_PKGCONFIG), TRUE) INCL = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --cflags clp` else INCL = endif INCL += $(ADDINCFLAGS) # Linker flags ifeq ($(COIN_HAS_PKGCONFIG), TRUE) LIBS = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --libs clp` else ifeq ($(COIN_CXX_IS_CL), TRUE) LIBS = -link -libpath:`$(CYGPATH_W) /home/martijn/Downloads/COIN/coin-Clp/lib` libClp.lib else LIBS = -L/home/martijn/Downloads/COIN/coin-Clp/lib -lClp endif endif # The following is necessary under cygwin, if native compilers are used CYGPATH_W = echo # Here we list all possible generated objects or executables to delete them CLEANFILES = clp \ main.o \ flexibility.o \ constraints.o \ all: $(EXE) .SUFFIXES: .cpp .c .o .obj $(EXE): $(OBJS) bla=;\ for file in $(OBJS); do bla="$$bla `$(CYGPATH_W) $$file`"; done; \ $(CXX) $(CXXLINKFLAGS) $(CXXFLAGS) -o $@ $$bla $(LIBS) $(ADDLIBS) clean: rm -rf $(CLEANFILES) .cpp.o: $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$< .cpp.obj: $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else 
$(CYGPATH_W) '$(SRCDIR)/$<'; fi` .c.o: $(CC) $(CFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$< .c.obj: $(CC) $(CFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi` The other Makefile compiles a lot of code and makes use of bison and flex. This one is also made by someone else. I am able to alter this Makefile when I want to add some code. This Makefile also makes a binary. CFLAGS=-Wall LDLIBS=-LC:/GnuWin32/lib -lfl -lm LSOURCES=lex.l YSOURCES=grammar.ypp CSOURCES=debug.cpp esta_plus.cpp heap.cpp main.cpp stjn.cpp timing.cpp tmsp.cpp token.cpp chaining.cpp flexibility.cpp exceptions.cpp HSOURCES=$(CSOURCES:.cpp=.h) includes.h OBJECTS=$(LSOURCES:.l=.o) $(YSOURCES:.ypp=.tab.o) $(CSOURCES:.cpp=.o) all: solver solver: CFLAGS+=-g -O0 -DDEBUG solver: $(OBJECTS) main.o debug.o g++ $(CFLAGS) -o $@ $^ $(LDLIBS) solver.release: CFLAGS+=-O5 solver.release: $(OBJECTS) main.o g++ $(CFLAGS) -o $@ $^ $(LDLIBS) %.o: %.cpp g++ -c $(CFLAGS) -o $@ $< lex.cpp: lex.l grammar.tab.cpp grammar.tab.hpp flex -o$@ $< %.tab.cpp %.tab.hpp: %.ypp bison --verbose -d $< ifneq ($(LSOURCES),) $(LSOURCES:.l=.cpp): $(YSOURCES:.y=.tab.h) endif -include $(OBJECTS:.o=.d) clean: rm -f $(OBJECTS) $(OBJECTS:.o=.d) $(YSOURCES:.ypp=.tab.cpp) $(YSOURCES:.ypp=.tab.hpp) $(YSOURCES:.ypp=.output) $(LSOURCES:.l=.cpp) solver solver.release 2>/dev/null .PHONY: all clean debug release Both of these Makefiles are, for me, hard to understand. I don't know what they exactly do. What I want is to merge the two of them so I get only one binary. The code compiled in the second Makefile should be the result. I want to add flexibility.cpp and constraints.cpp to the second Makefile, but when I do. I get the problem following problem: flexibility.h:4:26: fatal error: ClpSimplex.hpp: No such file or directory #include "ClpSimplex.hpp" So the compiler can't find the Clp library. I also tried to copy-paste more code from the first Makefile into the second, but it still gives me that same error. Q: Can you please help me with merging the two makefiles or pointing out a more elegant way? Q: In this case is it indeed better to merge the two Makefiles? I also tried to use cmake, but I gave upon that one quickly, because I don't know much about flex and bison.
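    The ClpSimplex.hpp error means the second Makefile never receives the include path that the first one obtains from pkg-config. Rather than merging the two files wholesale, one option is to pull just the Clp flags into the second Makefile (a sketch; the PKG_CONFIG_PATH is the one the generated Makefile already uses and may differ on another machine, and flexibility.cpp is already listed in CSOURCES):

        CLP_PKG   = PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig pkg-config
        CFLAGS   += $(shell $(CLP_PKG) --cflags clp)   # adds the -I path for ClpSimplex.hpp
        LDLIBS   += $(shell $(CLP_PKG) --libs clp)     # adds -L/-lClp for the link step
        CSOURCES += constraints.cpp                    # OBJECTS and HSOURCES are derived from this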

    Read the article

  • Prefer extension methods for encapsulation and reusability?

    - by tzaman
    edit4: wikified, since this seems to have morphed more into a discussion than a specific question. In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it. Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods: Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract. Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead! So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself providing a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states: In general, we recommend that you implement extension methods sparingly and only when you have to. (edit: Even in the current .NET library, I can see places where it would've been useful to have extensions instead of instance methods - for example, all of the utility functions of List<T> (Sort, BinarySearch, FindIndex, etc.) would be incredibly useful if they were lifted up to IList<T> - getting free bonus functionality like that adds a lot more benefit to implementing the interface.) So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself? (edit2: In response to Tomas - while C# did start out with Java's (overly, imo) OO mentality, it seems to be embracing more multi-paradigm programming with every new release; the main thrust of this question is whether using extension methods to drive a style change (towards more generic / functional C#) is useful or worthwhile..) edit3: overridable extension methods The only real problem identified so far with this approach, is that you can't specialize extension methods if you need to. I've been thinking about the issue, and I think I've come up with a solution. Suppose I have an interface MyInterface, which I want to extend - I define my extension methods in a MyExtension static class, and pair it with another interface, call it MyExtensionOverrider. 
MyExtension methods are defined according to this pattern: public static int MyMethod(this MyInterface obj, int arg, bool attemptCast=true) { if (attemptCast && obj is MyExtensionOverrider) { return ((MyExtensionOverrider)obj).MyMethod(arg); } // regular implementation here } The override interface mirrors all of the methods defined in MyExtension, except without the this or attemptCast parameters: public interface MyExtensionOverrider { int MyMethod(int arg); string MyOtherMethod(); } Now, any class can implement the interface and get the default extension functionality: public class MyClass : MyInterface { ... } Anyone that wants to override it with specific implementations can additionally implement the override interface: public class MySpecializedClass : MyInterface, MyExtensionOverrider { public int MyMethod(int arg) { //specialized implementation for one method } public string MyOtherMethod() { // fallback to default for others MyExtension.MyOtherMethod(this, attemptCast: false); } } And there we go: extension methods provided on an interface, with the option of complete extensibility if needed. Fully general too, the interface itself doesn't need to know about the extension / override, and multiple extension / override pairs can be implemented without interfering with each other. I can see three problems with this approach - It's a little bit fragile - the extension methods and override interface have to be kept synchronized manually. It's a little bit ugly - implementing the override interface involves boilerplate for every function you don't want to specialize. It's a little bit slow - there's an extra bool comparison and cast attempt added to the mainline of every method. Still, all those notwithstanding, I think this is the best we can get until there's language support for interface functions. Thoughts?
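    As a small, concrete companion to the earlier point about List<T> helpers (a sketch, not BCL code): lifting FindIndex up to IList<T> gives every implementer the method for free:

        using System;
        using System.Collections.Generic;

        public static class ListExtensions
        {
            // Any IList<T> implementer picks this up automatically.
            public static int FindIndex<T>(this IList<T> list, Predicate<T> match)
            {
                for (int i = 0; i < list.Count; i++)
                    if (match(list[i]))
                        return i;
                return -1;
            }
        }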

    Read the article

  • Drupal: Create custom search

    - by Dr. Hfuhruhurr
I'm trying to create a custom search but I'm getting stuck. What I want is a dropdown box so the user can choose what to search in. Each option can map to one or more content types: if the user chooses option A, the search should look in node types P, Q and R. But the search shouldn't return those nodes as results, only the uids, which will then be themed to show specific data for each user. To make it a little clearer: suppose I want to look for people. What I'm actually searching in is two content-profile types, but of course you don't want to display those as results - you want a nice picture of the user and some of their data. I started by creating a form with a textfield and the dropdown box. Then, in the submit handler, I build the search keys and redirect to another page with those keys appended as the tail. That page is defined in the menu hook, just like the search module does it. After that I want my view callback to do the actual search by calling node_search and give back the results. Unfortunately, it goes wrong: when I click the Search button, I get a 404. Am I on the right track? Is this the way to create a custom search? Thanks for your help. Here's the code for some clarity:

<?php
// $Id$

/*
 * @file
 * Searches on Project, Person, Portfolio or Group.
 */

/**
 * returns an array of menu items
 * @return array of menu items
 */
function vm_search_menu() {
  $subjects = _vm_search_get_subjects();
  foreach ($subjects as $name => $description) {
    $items['zoek/'. $name .'/%menu_tail'] = array(
      'page callback' => 'vm_search_view',
      'page arguments' => array($name),
      'type' => MENU_LOCAL_TASK,
    );
  }
  return $items;
}

/**
 * create a block to put the form into.
 * @param $op
 * @param $delta
 * @param $edit
 * @return mixed
 */
function vm_search_block($op = 'list', $delta = 0, $edit = array()) {
  switch ($op) {
    case 'list':
      $blocks[0]['info'] = t('Algemene zoek');
      return $blocks;
    case 'view':
      if (0 == $delta) {
        $block['subject'] = t('');
        $block['content'] = drupal_get_form('vm_search_general_form');
      }
      return $block;
  }
}

/**
 * Define the form.
 */
function vm_search_general_form() {
  $subjects = _vm_search_get_subjects();
  foreach ($subjects as $key => $subject) {
    $options[$key] = $subject['desc'];
  }
  $form['subjects'] = array(
    '#type' => 'select',
    '#options' => $options,
    '#required' => TRUE,
  );
  $form['keys'] = array(
    '#type' => 'textfield',
    '#required' => TRUE,
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Zoek'),
  );
  return $form;
}

function vm_search_general_form_submit($form, &$form_state) {
  $subjects = _vm_search_get_subjects();
  $keys = $form_state['values']['keys']; // the search keys
  // the content types to search in
  $keys .= ' type:' . implode(',', $subjects[$form_state['values']['subjects']]['types']);
  // redirect to the page, where vm_search_view will handle the actual search
  $form_state['redirect'] = 'zoek/'. $form_state['values']['subjects'] .'/'. $keys;
}

/**
 * Menu callback; presents the search results.
 */
function vm_search_view($type = 'node') {
  // Search form submits with POST but redirects to GET. This way we can keep
  // the search query URL clean as a whistle:
  // search/type/keyword+keyword
  if (!isset($_POST['form_id'])) {
    if ($type == '') {
      // Note: search/node can not be a default tab because it would take on the
      // path of its parent (search). It would prevent remembering keywords when
      // switching tabs. This is why we drupal_goto to it from the parent instead.
      drupal_goto($front_page);
    }
    $keys = search_get_keys();
    // Only perform search if there is non-whitespace search term:
    $results = '';
    if (trim($keys)) {
      // Log the search keys:
      watchdog('vm_search', '%keys (@type).', array('%keys' => $keys, '@type' => $type));
      // Collect the search results:
      $results = node_search('search', $type);
      if ($results) {
        $results = theme('box', t('Zoek resultaten'), $results);
      }
      else {
        $results = theme('box', t('Je zoek heeft geen resultaten opgeleverd.'));
      }
    }
  }
  return $results;
}

/**
 * returns array where to look for
 * @return array
 */
function _vm_search_get_subjects() {
  $subjects['opdracht'] = array('desc' => t('Opdracht'), 'types' => array('project'));
  $subjects['persoon'] = array('desc' => t('Persoon'), 'types' => array('types_specialisatie', 'smaak_en_interesses'));
  $subjects['groep'] = array('desc' => t('Groep'), 'types' => array('Villamedia_groep'));
  $subjects['portfolio'] = array('desc' => t('Portfolio'), 'types' => array('artikel'));
  return $subjects;
}

    Read the article

  • Build Environment setup - Using .net, java, hudson, and ruby - Could really use a critique

    - by Jeff D
I'm trying to figure out the best way to stitch together a fast, repeatable, unbreakable build process for the following environment. I've got a plan for how to do it, but I'd really appreciate a critique. (I'd also appreciate some sample code, but more on that later.)

Ecosystem - Logical:
- Website - asp.net MVC 2, .net 3.5, Visual Studio 2010, IIS 6, Facebook iframe application. This website/facebook app uses a few services: an internal search api, an internal read/write api, facebook, and an IP geolocation service. More details on these below.
- Internal search api - .net, restful, built using old-school .ashx handlers. The api uses lucene and a sql server database behind the scenes. My project won't touch the lucene code, but does potentially touch the database and the web services.
- Internal read/write api - java, restful, running on Tomcat.
- Facebook web services.
- A mocking site that emulates the internal read/write api and parts of the facebook api.
- Hudson - runs unit tests on checkin and creates some installers that behave inconsistently.

Ecosystem - Physical:
All of these machines can talk to one another, except for Hudson. Hudson can't see any of the target machines, so code must be pulled rather than pushed (a security thing).
1. Web server - holds the website and the read/write api. (The api itself writes to a replicated sql server environment.)
2. Search server - houses the search api.
3. Hudson server - does not have permissions to push to any environment; the target machines have to pull.
4. Lucene server
5. Database server

Problem
I've been trying to set this site up to run in a stress environment, but the number of setup steps, the amount of time it takes to update a component, the black-box nature of the current installers, and the time it takes to generate data into the test system are absolutely destroying my productivity. I tweak one setting, have to redeploy, restart in a certain order, re-set up some of the settings, and rebuild test data. Errors result in head-scratching, and then basically starting over. Very bad.
This problem is complicated further by my stress testing. I need to be able to turn different external components on and off so that I can effectively determine the scalability of each piece. I've got strategies in place for how to do that for each dependency, but it further complicates my setup strategy, because now each component has two options - a mock version or a real version - and configurations everywhere must be updated accordingly.

Goals
- Fast - I want to drop this from a 20-minute exercise when things go perfectly to a 3-minute one.
- Stupid simple - I want to tell the environment what to do with as few commands as possible, and not have to remember how to stitch the environments together.
- Repeatable - I want the script to be idempotent. Kind of a corollary to the stupid-simple thing.

The Plan So Far
Here's what I've come up with so far, and what I've come looking for feedback on:
Use Visual Studio's new web.config transformations to permit easily altering configs based on environment. This solution isn't really sufficient on its own, though. I will leave web.config set up to let the site run locally, but when deploying elsewhere, I have as many as 6 different possible outputs for the stress environment alone (because of the mocks of the various dependencies), let alone the settings for prod, QA, and dev. Each of these would then require its own setup, or a setup that would post-process the configs. So I'm currently leaning toward just having the dev version, plus a version that converts key configuration values into a ruby string-interpolation syntax ({#VAR_NAME} kind of thing).
Create a ruby script for each server that is essentially a bootstrapping script. That is to say, it will do nothing but load the ruby code that does the 'real' work from hudson/subversion, so that the script's functionality can evolve with the application, making it easy to build the site at any point in time by referencing the appropriate version of the script. So in a nutshell, this script loads another script and runs it.
The 'real' ruby script will then accept command-line parameters that describe how the environment should look. From there, one configuration file can be used, and ruby will download the current installers, run them, post-process the configs, restart IIS/Tomcat, and kick off any data setup code that is needed.
So that's it. I'm in a real time crunch to get this site stress-tested, so any feedback that you think could abbreviate the time this might take would be appreciated. That includes a shameless request for sample ruby code - I've not gotten much further than puts "Hello World". :-) Just guidance would be helpful. Is this something that Rake would be useful for? How would you recommend I write tests for this animal? (I use interfaces and automocking frameworks to mock out things like http requests in .net. With duck typing, it seems this might be easier, but I don't know how to tell my code to use a fake duck in tests and a real one in practice.) Thanks all. Sorry for such a long-winded, open-ended question.
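To make the bootstrapping idea above concrete, here is a minimal sketch of what the per-server bootstrap script might look like. Everything in it is an illustrative assumption - the subversion URL, the paths, the script name, and the example flags are made up; it only shows the "load a versioned script and run it" shape described in the plan.

#!/usr/bin/env ruby
# bootstrap.rb - tiny, stable entry point that lives on each server.
# It only fetches the versioned 'real' build script and hands control to it.
require 'fileutils'

# All of these values are illustrative assumptions.
SVN_URL     = 'http://hudson.example.local/svn/build-scripts/trunk'
WORK_DIR    = '/opt/build/scripts'
REAL_SCRIPT = File.join(WORK_DIR, 'build_environment.rb')

# Pull (not push) the current build scripts, per the security constraint.
FileUtils.mkdir_p(WORK_DIR)
system("svn checkout #{SVN_URL} #{WORK_DIR}") || abort('svn checkout failed')

# Everything after the bootstrap's own name is passed straight through,
# e.g.: ruby bootstrap.rb --env stress --search mock --readwrite real
exec('ruby', REAL_SCRIPT, *ARGV)

The point of keeping the bootstrap this small is that it rarely changes, while the downloaded build_environment.rb can evolve with the application and be pinned to whatever revision matches the code being deployed.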

    Read the article
