Search Results

Search found 8440 results on 338 pages for 'wms implementation'.


  • How do I get require_login()-like functionality using the new PHP Client Library for Facebook?

    - by cc
    Howdy. I've been tasked with making a Facebook game, but I'm new to Facebook development, so I'm just getting started. Apologies in advance if this is a no-brainer to people. I'm having trouble following all the examples I see on sites, and I keep running into missing pages in the Facebook documentation when I am trying to read up. I think it's because there's a new version of the PHP Client Library for Facebook, and everything I'm finding is referring to the old client. For instance, I see this code in a lot of examples: require 'facebook.php'; $facebook = new Facebook( array( 'appId' => '(id)', 'secret' => '(secret)' ) ); $facebook_account = $facebook->require_login(); ...but there's no "require_login()" in the client library provided in the facebook.php file. From what I can tell, it looks like Facebook has very recently rolled out some new system for development, but I don't see any sample code anywhere to deal with it. The new library comes with an "example.php" file, but it appears to be only for adding "Log in with Facebook" functionality to other sites (what I'm assuming is what they mean by "Facebook Connect" sites), not for just running apps in a Canvas page on Facebook itself. Specifically, what I need to do is let users visit an application page within Facebook, have it bring up the dialog box allowing them to authorize the app, have it show up in their "games" page, and then have it pass me the relevant info about the user so I can start creating the game. But I can't seem to find any tutorials or examples that show how to do this using the new library. Seems like this should be pretty straightforward, but I'm running into roadblocks. Or am I missing something about the PHP client library? Should require_login() be working for me, and there's something broken with my implementation, such as having the wrong client library or something? I downloaded from GitHub yesterday, so I'm pretty sure I have the most recent version of the code, but perhaps I'm downloading the wrong "facebook.php" file...?
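
    A rough sketch of how the login check tends to look with the newer library in place of require_login(). The method names getUser(), getLoginUrl() and api() are taken from memory of the facebook.php SDK on GitHub, so treat them as assumptions to verify against the bundled example.php; the app id and secret are placeholders.

        <?php
        // Hedged sketch: canvas-app login flow without require_login().
        require 'facebook.php';

        $facebook = new Facebook(array(
            'appId'  => '(id)',
            'secret' => '(secret)',
        ));

        $userId = $facebook->getUser();   // 0 when the visitor has not authorized the app yet

        if (!$userId) {
            // Inside a canvas iframe the redirect has to break out of the frame,
            // hence the top.location trick instead of a plain header() redirect.
            $loginUrl = $facebook->getLoginUrl();
            echo '<script>top.location.href = "' . $loginUrl . '";</script>';
            exit;
        }

        // Authorized: fetch basic profile data and start building the game state.
        $profile = $facebook->api('/me');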

    Read the article

  • Why does this tooltip appear *below* a translucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity less than 1.0. I have a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over the form. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form. However, my form is obviously no longer translucent. ;-) I'm testing on an XP system with .NET 3.5. If you don't see this problem on your system, let me know what operating system and version of .NET you have. I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32. Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should? Here is a minimal program to demonstrate the problem: using System; using System.Windows.Forms; public class Form1 : Form { private ToolTip toolTip1; private Label label1; [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } public Form1() { toolTip1 = new ToolTip(); label1 = new Label(); label1.Location = new System.Drawing.Point(105, 127); label1.Text = "Hover over me"; label1.AutoSize = true; toolTip1.SetToolTip(label1, "This is a moderately long string, " + "designed to be very long so that it will also be quite long."); ClientSize = new System.Drawing.Size(292, 268); Controls.Add(label1); Opacity = 0.8; } }

    Read the article

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi All, Let me define the problem first and why a message queue has been chosen. I have a datalayer that will be transactional and EXTREMELY insert heavy, and rather than attempt to deal with these issues when they occur I am hoping to implement my application from the ground up with this in mind. I have decided to tackle this problem by using the Microsoft Message Queue and perform inserts as time permits asynchronously. However I quickly ran into a problem. Certain inserts that I perform may need to be recalled (ie: retrieved) immediately (imagine this is for a POS system and what happens if you need to recall the last transaction - one that still hasn't been inserted). The way I decided to tackle this problem is by abstracting the MessageQueue and combining it in my data access layer, thereby creating the illusion of a single set of data being returned to the user of the datalayer (I have considered the other issues that occur in such a scenario (ie: essentially dirty reads and such) and have concluded for my purposes I can control these issues). However this is where things get a little nasty... I've worked out how to get the messages back and such (trivial enough problem) but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue? One where I can minimize the duplication between the SQL queries and MessageQueue queries. I have considered using LINQ (but have very limited understanding of the technology) and have also attempted an implementation with Predicates which so far is pretty smelly. Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered… Thanks again.
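
    A minimal sketch of one way to keep the two query paths symmetrical (all type names such as Transaction and TransactionStore are illustrative, not part of any library): express the criteria once as an expression tree, run it against SQL via a LINQ provider, and against the not-yet-flushed queue items via LINQ to Objects.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        // Hedged sketch: a store that merges rows already in SQL with inserts
        // still sitting in the local MSMQ buffer, driven by one predicate.
        public class Transaction
        {
            public Guid Id { get; set; }
            public decimal Total { get; set; }
        }

        public class TransactionStore
        {
            private readonly IQueryable<Transaction> _database;   // e.g. a LINQ to SQL table
            private readonly List<Transaction> _pending;          // queued but not yet inserted

            public TransactionStore(IQueryable<Transaction> database, List<Transaction> pending)
            {
                _database = database;
                _pending = pending;
            }

            public IEnumerable<Transaction> Find(Expression<Func<Transaction, bool>> criteria)
            {
                // The same expression drives both sides; Compile() turns it into a
                // delegate for the in-memory pending list.
                var fromDb = _database.Where(criteria).ToList();
                var fromQueue = _pending.Where(criteria.Compile());
                return fromDb.Concat(fromQueue);
            }
        }

    With that shape, store.Find(t => t.Total > 100m) sees the last POS transaction whether or not the asynchronous insert has completed, and the criteria are written only once.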

    Read the article

  • How to handle Win+Shift+Left/Right on Win7 with custom WM_GETMINMAXINFO logic?

    - by Steven Robbins
    I have a custom window implementation in a WPF app that hooks WM_GETMINMAXINFO as follows: private void MaximiseWithTaskbar(System.IntPtr hwnd, System.IntPtr lParam) { MINMAXINFO mmi = (MINMAXINFO)Marshal.PtrToStructure(lParam, typeof(MINMAXINFO)); System.IntPtr monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST); if (monitor != System.IntPtr.Zero) { MONITORINFO monitorInfo = new MONITORINFO(); GetMonitorInfo(monitor, monitorInfo); RECT rcWorkArea = monitorInfo.rcWork; RECT rcMonitorArea = monitorInfo.rcMonitor; mmi.ptMaxPosition.x = Math.Abs(rcWorkArea.left - rcMonitorArea.left); mmi.ptMaxPosition.y = Math.Abs(rcWorkArea.top - rcMonitorArea.top); mmi.ptMaxSize.x = Math.Abs(rcWorkArea.right - rcWorkArea.left); mmi.ptMaxSize.y = Math.Abs(rcWorkArea.bottom - rcWorkArea.top); mmi.ptMinTrackSize.x = Convert.ToInt16(this.MinWidth * (desktopDpiX / 96)); mmi.ptMinTrackSize.y = Convert.ToInt16(this.MinHeight * (desktopDpiY / 96)); } Marshal.StructureToPtr(mmi, lParam, true); } It all works a treat and it allows me to have a borderless window maximized without having it sit on top of the task bar, which is great, but it really doesn't like being moved between monitors with the new Win7 keyboard shortcuts. Whenever the app is moved with Win+Shift+Left/Right the WM_GETMINMAXINFO message is received, as I'd expect, but MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST) returns the monitor the application has just been moved FROM, rather than the monitor it is moving TO, so if the monitors are of differing resolutions the window ends up the wrong size. I'm not sure if there's something else I can call, other than MonitorFromWindow, or whether there's a "moving monitors" message I can hook prior to WM_GETMINMAXINFO. I'm assuming there is a way to do it because "normal" windows work just fine.

    Read the article

  • GWT + Seam, cannot fetch scoped beans from gwt servlet in seam resource servlet.

    - by David Göransson
    Hello all, I am trying to get session and conversation scoped beans to a gwt servlet in the seam resource servlet. I have a conversation scoped bean: @Name ("viewFormCopyAction") @Scope (ScopeType.CONVERSATION) public class ViewFormCopyAction {} and a session scoped bean: @Name ("authenticator") @Scope (ScopeType.SESSION) public class AuthenticatorAction {} There is a RemoteService interface: @RemoteServiceRelativePath ("strokesService") public interface StrokesService extends RemoteService { public Position getPosition (int conversationId); } with corresponding async interface: public interface StrokesServiceAsync extends RemoteService { public void getPosition (int conversationId, AsyncCallback callback); } and implementation: @Name ("com.web.actions.forms.gwt.client.StrokesService") @Scope (ScopeType.EVENT) public class StrokesServiceImpl implements StrokesService { @In Manager manager; @Override @WebRemote public Position getPosition (int conversationId) { manager.switchConversation( "" + conversationId ); ViewFormCopyAction vfca = (ViewFormCopyAction) Component.getInstance( "viewFormCopyAction" ); AuthenticatorAction aa = (AuthenticatorAction) Component.getInstance( "authenticator" ); return null; } } The gwt page is within an IFrame in a regular seam page and the conversationId is propagated with the src attribute of the IFrame. Both bean objects end up with only null values. Can anyone see anything wrong with the code? I know that I could use strings instead of the int, but never mind that at this point.

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example Parallel extensions which are directly available in .NET 4). Now what about parallelism across the network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web resource or local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task will be limited to computing, without being able for example to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application, The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref: let rec read file includepath = let c = ref ' ' let k = ref NONE let sb = new StringBuilder() use stream = File.OpenText file let readc() = c := stream.Read() |> char // etc I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?
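
    For comparison, a rough sketch of the class-based alternative mentioned in the question, with the lookahead state held in let mutable fields. The Token type is a placeholder standing in for whatever token type the lexer already defines; everything else follows the original read function.

        open System.IO
        open System.Text

        // Placeholder token type; the real one comes from the existing lexer.
        type Token =
            | NONE
            | IDENT of string

        // Hedged sketch: same lookahead state as the ref-cell version, but kept in
        // mutable fields of a small parser object instead of heap-allocated refs.
        type Reader(file: string, includepath: string) =
            let stream = File.OpenText file
            let sb = StringBuilder()
            let mutable c = ' '        // next available character
            let mutable k = NONE       // next available token

            member this.ReadC() =
                c <- stream.Read() |> char

            member this.Current = c

            interface System.IDisposable with
                member this.Dispose() = stream.Dispose()

    Whether this is actually faster is worth measuring: each ref cell is a separate heap object, so the field version mostly saves one small allocation and one indirection per variable, which is the usual argument for the class style.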

    Read the article

  • importing symbols from python package into caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to a point where I would like to split the contents across a number of different directories + files. The new directory structure currently looks like this: dsl/ __init__.py types/ __init__.py type1.py type2.py and each type file contains a class (e.g. Type1). My problem is that I would like to keep the implementation of code that uses this DSL as simple as possible, something like: import dsl x = Type1() ... This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols: from types.type1 import Type1 from types.type2 import Type2 ... print globals() the output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think that the problem is that the symbols are actually being imported to the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?
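
    One detail worth separating out: a plain import dsl only binds the name dsl in the caller, so the caller would still write dsl.Type1() no matter what __init__.py does. What the package can do is re-export the classes and advertise them for a star import. A small sketch, with paths assumed to match the layout above:

        # dsl/__init__.py -- a hedged sketch
        from dsl.types.type1 import Type1   # explicit paths avoid ambiguity with the stdlib 'types' module
        from dsl.types.type2 import Type2

        __all__ = ['Type1', 'Type2']        # names exported by 'from dsl import *'

        # caller code
        from dsl import *                   # or: from dsl import Type1
        x = Type1()

    If a bare import dsl really is a requirement, the caller is always left writing dsl.Type1(); an imported module has no supported way to inject names into the importer's namespace.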

    Read the article

  • Deleting elements from stl set while iterating through it does not invalidate the iterators.

    - by pedromanoel
    I need to go through a set and remove elements that meet a predefined criterion. This is the test code I wrote: #include <iostream> #include <set> #include <algorithm> void printElement(int value) { std::cout << value << " "; } int main() { int initNum[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; std::set<int> numbers(initNum, initNum + 10); // print '0 1 2 3 4 5 6 7 8 9' std::for_each(numbers.begin(), numbers.end(), printElement); std::set<int>::iterator it = numbers.begin(); // iterate through the set and erase all even numbers for (; it != numbers.end(); ++it) { int n = *it; if (n % 2 == 0) { // wouldn't invalidate the iterator? numbers.erase(it); } } // print '1 3 5 7 9' std::for_each(numbers.begin(), numbers.end(), printElement); return 0; } At first, I thought that erasing an element from the set while iterating through it would invalidate the iterator, and the increment at the for loop would have undefined behavior. Even so, I executed this test code and all went well, and I can't explain why. My question: Is this the defined behavior for std sets or is this implementation specific? I am using gcc 4.3.3 on ubuntu 10.04 (32-bit version), by the way. Thanks!
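
    For reference, a sketch of the loop rewritten with the usual safe idioms. Erasing from a std::set only invalidates iterators to the erased element, but the original loop still does ++it on that invalidated iterator afterwards, which is strictly undefined behavior even when it happens to work on a given implementation.

        // Pre-C++11: post-increment hands erase() a copy while 'it' has already moved on.
        for (std::set<int>::iterator it = numbers.begin(); it != numbers.end(); /* no ++ here */) {
            if (*it % 2 == 0)
                numbers.erase(it++);
            else
                ++it;
        }

        // C++11 and later: erase() returns the next valid iterator.
        for (auto it = numbers.begin(); it != numbers.end(); ) {
            if (*it % 2 == 0)
                it = numbers.erase(it);
            else
                ++it;
        }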

    Read the article

  • Added CAGradientLayer, getting this in my UIView dealloc: [CALayer release]: message sent to deallocated instance

    - by developerdoug
    Hi there, I have a custom UIView. This view acts as an activity indicator but with a label above the UIActivityIndicatorView. In the init, I add a CAGradientLayer. I allocate and initialize it and insert it at index 0 as a sublayer of the UIView layer property. When my dealloc method was called, I received a message in the console: - [CALayer release]: message sent to deallocated instance. My code: @interface LabelActivityIndicatorView () { UILabel *_label; UIActivityIndicatorView *_activityIndicatorView; CAGradientLayer *_gradientLayer; } @end @implementation LabelActivityIndicatorView //dealloc - (void) dealloc { [_label release]; [_activityIndicatorView release]; //even tried to remove the layer [_gradientLayer removeFromSuperLayer]; [_gradientLayer release]; [super dealloc]; } // init - (id) initWithFrame:(CGRect)frame { if ( (self = [super initWithFrame:frame]) ) { // init the label // init the gradient layer _gradientLayer = [[CAGradientLayer alloc] init]; [_gradientLayer setBounds:[self bounds]]; [_gradientLayer setPosition:CGPointMake(frame.size.width/2, frame.size.height/2)]; [[self layer] insertSublayer:_gradientLayer atIndex:0]; [[self layer] setNeedsDisplay]; } return self; } @end Anyone have any ideas? Since I'm allocating and initializing the gradient layer I'm responsible for releasing it. I should be able to alloc and init and assign it to some ivar. Perhaps I should create a property with retain on it. Thanks,
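
    A hedged sketch of one way to keep the ownership unambiguous under manual reference counting: let the layer hierarchy be the owner and keep only a non-owning ivar. (Also note the selector is spelled removeFromSuperlayer, lowercase l, so the call in the dealloc above never actually removes the layer.)

        // Sketch: the view's backing layer retains its sublayers, so the ivar can be non-owning.
        - (id)initWithFrame:(CGRect)frame
        {
            if ((self = [super initWithFrame:frame])) {
                CAGradientLayer *gradient = [CAGradientLayer layer];   // autoreleased; we never own it
                [gradient setBounds:[self bounds]];
                [gradient setPosition:CGPointMake(frame.size.width / 2, frame.size.height / 2)];
                [[self layer] insertSublayer:gradient atIndex:0];       // the layer hierarchy retains it here
                _gradientLayer = gradient;                              // plain reference for later use
            }
            return self;
        }

        - (void)dealloc
        {
            [_label release];
            [_activityIndicatorView release];
            // no release of _gradientLayer: the backing layer releases its own sublayers
            [super dealloc];
        }

    This does not by itself explain the original crash, but removing the doubt over who owns the layer usually makes this class of "[CALayer release]" message much easier to pin down with NSZombieEnabled.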

    Read the article

  • Esper generating episodes

    - by Jasonojh
    I would like to use Esper to generate episodes of events. I am trying to detect the changes in robot movement during each time period and was wondering what would be the best way to implement it. The rules for generating episodes from the events would be: if the new event time (e.g. 7 sec, robot A) of a robot is more than 3 sec after the latest event (e.g. 3 sec, robot A) of the same robot, the new event belongs to a new episode; and each episode should represent only one robot (e.g. 2 sec, robotA and 3 sec, robotB should output 2 episodes).
    Input data:
        Event  Time  Robot  Position
        1      1     A      0
        2      2     A      1
        3      6     A      2
    Output data should be:
        Array[0] = {Event 1, Event 2}
        Array[1] = {Event 3}   // more than 3 sec later
    Input data:
        Event  Time  Robot  Position
        1      1     A      0
        2      2     A      1
        3      4     B      0
        4      6     A      2
    Output data should be:
        Array[0] = {Event 1, Event 2}
        Array[1] = {Event 3}   // different robot
        Array[2] = {Event 4}
    Please help provide suggestions. I have tried using multiple listeners, one for each robot, to create episodes and it works, but I am trying to use a single EPL statement to do it. I have tried win:time_accum(3 sec) group by robot, but for the second example it outputs Array[0] = {Event 1, Event 2, Event 4} and Array[1] = {Event 3}: as the time window is shifted every time an event comes in, the system still thinks that event 4 is less than 3 sec away due to event 3. How do I create a separate time window for each robot? Thank you for your suggestions; any help is greatly appreciated.
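
    One avenue worth trying (hedged: the grouped-window syntax below is from memory of Esper's std:groupwin view and needs checking against the version in use): partition the stream per robot, so each robot gets its own 3-second accumulating window instead of all robots sharing one window.

        // Hedged EPL sketch; RobotEvent and robot stand in for the real event type and property.
        select * from RobotEvent.std:groupwin(robot).win:time_accum(3 sec)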

    Read the article

  • Where are mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems: Design and Implementation, 3rd edition, and I'm now at the part in the book where Tanenbaum is discussing bootup and kernel process switching. He keeps referring to these 2 files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them. In the root directory, when I go to boot/minix/3.2.0/kernel, kernel just seems to be a binary file that is illegible in terminal. There also seems to be a bunch of mod01-mod12 .gz binary files in the 3.2.0 directory. Am I in the wrong directory, or is there something I need to install and do to read kernel? I would like to follow along with the book against what's on my screen, instead of constantly flipping back and forth. I realize a lot of files are completely different from this book published in 2006 and I accept that, but this seems to be a critical juncture of the book and the operating system as a whole. If it's any consolation, I'm running the OS in VirtualBox on a 64-bit MacBook.
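
    The files under boot/minix/3.2.0 appear to be the compiled kernel image and its boot modules, not the sources the book walks through; those live in the source tree, if it was installed. A hedged way to look for them (paths are assumptions; the 3.x source layout moved architecture-specific files around relative to the book):

        # Look for kernel sources rather than the installed boot images.
        ls /usr/src/kernel                                      # main.c, proc.c, ... if sources are installed
        find /usr/src -name 'mpx*' -o -name 'start.c' 2>/dev/null

    If nothing turns up, the source tree may simply not be present in the VM image; it can be added from the installation media or downloaded from minix3.org, and in the 3.x layout the x86-specific files such as the mpx assembly code sit under kernel/arch/i386 rather than directly in kernel as the book describes.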

    Read the article

  • How to use interfaces in exception handling

    - by vikp
    Hi, I'm working on the exception handling layer for my application. I have read a few articles on interfaces and generics. I have used inheritance before quite a lot and I'm comfortable in that area. I have a very brief design that I'm going to implement: public interface IMyExceptionLogger { void LogException(); // Helper methods for writing into files, db, xml } I'm slightly confused about what I should be doing next. public class FooClass: IMyExceptionLogger { // Fields // Constructors } Should I implement the LogException() method within FooClass? If yes, then I'm struggling to see how I'm better off using an interface instead of a concrete class... I have a variety of classes that will make use of that interface, but I don't want to write an implementation of that interface within each class. At the same time, if I implement an interface in one class, and then use that class in different layers of the application, I will still be using concrete classes instead of interfaces, which is bad OO design... I hope this makes sense. Any feedback and suggestions are welcome. Please notice that I'm not interested in using log4net or its competitors because I'm doing this to learn. Thank you Edit: Wrote some more code. So I will implement a variety of loggers with this interface, i.e. DBExceptionLogger, CSVExceptionLogger, XMLExceptionLogger etc. Then I will still end up with concrete classes that I will have to use in different layers of my application.
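
    A hedged sketch of how this usually resolves: the classes that need logging do not implement the interface, they depend on it, and the concrete logger is chosen once at a composition point. All type names below are illustrative.

        using System;

        public interface IMyExceptionLogger
        {
            void LogException(Exception ex);
        }

        // One concrete strategy per destination.
        public class DbExceptionLogger : IMyExceptionLogger
        {
            public void LogException(Exception ex) { /* write to the database */ }
        }

        public class CsvExceptionLogger : IMyExceptionLogger
        {
            public void LogException(Exception ex) { /* append to a CSV file */ }
        }

        // FooClass consumes the abstraction; it never knows which logger it got.
        public class FooClass
        {
            private readonly IMyExceptionLogger _logger;

            public FooClass(IMyExceptionLogger logger)
            {
                _logger = logger;
            }

            public void DoWork()
            {
                try { /* ... */ }
                catch (Exception ex) { _logger.LogException(ex); }
            }
        }

        // Composition root: the only place a concrete logger type is mentioned.
        // var foo = new FooClass(new DbExceptionLogger());

    Swapping DbExceptionLogger for CsvExceptionLogger then touches one line instead of every layer, which is the payoff the interface is meant to buy.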

    Read the article

  • Accessing red5 server outside the localhost

    - by user1039290
    I am new to the red5 server, so I got stuck. I am trying to record videos from a webcam and save them on my server. To do this, I installed red5 on my server. In addition, I also downloaded red5recorder and put it into my webapps folder. But there isn't any information about its implementation details. Whatever. So I went on with the Red5 SimpleRecorder tutorial. Everything works fine when I try it on my server, but there is a problem when I try to connect to the server from another computer to record a video. Actually, video recording is handled but the recorded video is not uploaded to the server. When I work on localhost it works fine, but from outside I am not able to record or upload the video. I changed the red5-web.properties document, and set the virtual host to my server's IP, but it again only works on localhost. What could be the reason? Is it about file permissions? Or what else could it be? Kind regards, Can

    Read the article

  • WPF ItemsControl with DataTemplate, problem with doubled border for some items

    - by ksirg
    Hi, I have a simple ItemsControl with a custom DataTemplate; the template contains only a TextBlock with a border. All items should be displayed vertically one after another, but some items have an extra border. How can I remove it? I want to achieve something similar to the Enso launcher (screenshot omitted). My implementation looks like this (screenshot omitted). Here is my XAML code: <Window x:Class="winmole.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" x:Name="hostWindow" Height="Auto" MinHeight="100" MinWidth="100" Width="Auto" Padding="10" AllowsTransparency="True" WindowStyle="None" Background="Transparent" Top="0" Left="0" SizeToContent="WidthAndHeight" Topmost="True" Loaded="Window_Loaded" KeyUp="Window_KeyUp" > <Window.Resources> <!--Simple data template for Items--> <DataTemplate x:Key="itemsTemplate"> <Border Background="Black" Opacity="0.9" HorizontalAlignment="Left" CornerRadius="0,2,2,0"> <TextBlock Text="{Binding Path=Title}" TextWrapping="Wrap" FontFamily="Georgia" FontSize="30" Height="Auto" HorizontalAlignment="Left" VerticalAlignment="Stretch" TextAlignment="Left" Padding="5" Margin="0" Foreground="Yellow"/> </Border> </DataTemplate> </Window.Resources> <DockPanel> <ItemsControl DockPanel.Dock="Bottom" Name="itcPrompt" ItemsSource="{Binding ElementName=hostWindow, Path=DataItems}" ItemTemplate="{StaticResource itemsTemplate}" > <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <WrapPanel Orientation="Vertical" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> </ItemsControl> </DockPanel>

    Read the article

  • Why doesn't my UIViewController class keep track of an NSArray instance variable?

    - by TaoStoner
    Hey, I am new to Objective-C 2.0 and Xcode, so forgive me if I am missing something elementary here. Anyways, I am trying to make my own UIViewController class called GameView to display a new view. To run the game I need to keep track of an NSArray that I want to load from a plist file. I have made a method 'loadDefault' which I want to load the correct NSArray into an instance variable. However it appears that after the method executes the instance variable loses track of the array. It's easier if I just show you the code.... @interface GameView : UIViewController { IBOutlet UIView *view; IBOutlet UILabel *label; NSArray *currentGame; } -(IBOutlet)next; -(void)loadDefault; ... @implementation GameView - (IBOutlet)next{ int numElements = [currentGame count]; int r = rand() % numElements; NSString *myString = [currentGame objectAtIndex:(NSUInteger)r]; [label setText: myString]; } - (void)loadDefault { NSDictionary *games; NSString *path = [[NSBundle mainBundle] bundlePath]; NSString *finalPath = [path stringByAppendingPathComponent:@"Games.plist"]; games = [NSDictionary dictionaryWithContentsOfFile:finalPath]; currentGame = [games objectForKey:@"Default"]; } when loadDefault gets called, everything runs perfectly, but when I try to use the currentGame NSArray later in the method call to next, currentGame appears to be nil. I am also aware of the memory management issues with this code. Any help would be appreciated with this problem.
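
    Since the question already flags memory management, a hedged sketch of the most likely culprit: objectForKey: returns an object the caller does not own, and both the dictionary and its values are autoreleased, so by the time next runs the array may be gone. Taking ownership in loadDefault keeps it alive:

        - (void)loadDefault
        {
            NSString *path = [[NSBundle mainBundle] bundlePath];
            NSString *finalPath = [path stringByAppendingPathComponent:@"Games.plist"];
            NSDictionary *games = [NSDictionary dictionaryWithContentsOfFile:finalPath];

            [currentGame release];                                     // drop any previously loaded game
            currentGame = [[games objectForKey:@"Default"] retain];    // own the array beyond this autorelease pool
        }

        - (void)dealloc
        {
            [currentGame release];
            [super dealloc];
        }

    A retained property, as suggested at the end of the question, is the tidier way to express the same ownership.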

    Read the article

  • Not all symbols of a DLL-exported class are exported (VS9)

    - by mandrake
    I'm building a DLL from a group of static libraries and I'm having a problem where only parts of classes are exported. What I'm doing is declaring all symbols I want to export with a preprocessor definition like: #if defined(MYPROJ_BUILD_DLL) //Build as a DLL # define MY_API __declspec(dllexport) #elif defined(MYPROJ_USE_DLL) //Use as a DLL # define MY_API __declspec(dllimport) #else //Build or use as a static lib # define MY_API #endif For example: class MY_API Foo{ ... } I then build the static library with MYPROJ_BUILD_DLL and MYPROJ_USE_DLL undefined, causing a static library to be built. In another build I create a DLL from these static libraries. So I define MYPROJ_BUILD_DLL, causing all symbols I want to export to be attributed with __declspec(dllexport) (this is done by including all static library headers in the DLL-project source file). Ok, so now to the problem. When I use this new DLL I get unresolved externals because not all symbols of a class are exported. For example in a class like this: class MY_API Foo{ public: Foo(char const* ); int bar(); private: Foo( char const*, char const* ); }; Only Foo::Foo( char const*, char const*); and int Foo::bar(); are exported. How can that be? I can understand if the entire class was missing, because e.g. I forgot to include the header in the DLL build. But it's only partially missing. Also, if Foo::Foo( char const*) was not implemented, then the DLL build would have unresolved external errors. But the build is fine (I also double-checked for declarations without implementation). Note: The combined size of the static libraries I'm combining is in the region of 30MB, and the resulting DLL is 1.2MB. I'm using Visual Studio 9.0 (2008) to build everything, and Depends to check for exported symbols.
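
    A hedged way to narrow down where the symbols disappear, using the toolchain's own inspection commands (library and DLL names below are placeholders): compare what the static .lib actually contains with what the finished DLL exports. When a DLL is linked from static libraries, the linker only pulls in the object files that something in the DLL references, so unreferenced objects, and the exports they carry, can be dropped silently.

        REM What made it into the static library?
        dumpbin /symbols mylib.lib | findstr /i "Foo"

        REM What the DLL actually exports after linking.
        dumpbin /exports mydll.dll | findstr /i "Foo"

    If the single-argument constructor shows up in the .lib but not in the DLL, the usual suspects are an object file that nothing in the link references, or one of the libraries having been compiled with a different MY_API expansion; adding /VERBOSE to the linker command line shows exactly which members of each .lib were pulled in.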

    Read the article

  • problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was for "AcceptCallback" to be called only when a string arrives on the socket. Instead, my program does not just accept the connection: it always goes into "startReceive", gets stuck there, and sometimes the program crashes. Can anybody help? Thanks readSocket = CFSocketCreateWithNative( NULL, fd, kCFSocketReadCallBack, AcceptCallback, &context ); static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) // Called by CFSocket when someone connects to our listening socket. // This implementation just bounces the request up to Objective-C. { ServerVistaController * obj; #pragma unused(address) // assert(address == NULL); assert(data != NULL); obj = (ServerVistaController *) info; assert(obj != nil); #pragma unused(s) assert(s == obj->listeningSocket); if (type & kCFSocketAcceptCallBack){ [obj acceptConnection:*(int *)data]; } if (type & kCFSocketAcceptCallBack){ [obj startReceive:*(int *)data]; } } -(void)startReceive:(int)fd { CFReadStreamRef readStream = NULL; CFIndex bytes; UInt8 buffer[MAXLENGTH]; CFReadStreamRef readStream... } -- see original for the full startReceive body: CFStreamCreatePairWithSocket( kCFAllocatorDefault, fd, &readStream, NULL); if(!readStream){ close(fd); [self updateLabel:@"No readStream"]; } CFReadStreamOpen(readStream); [self updateLabel:@"OpenStream"]; bytes = CFReadStreamRead( readStream, buffer, sizeof(buffer)); if (bytes < 0) { [self updateLabel:(NSString*)buffer]; close(fd); } CFReadStreamClose(readStream); }
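
    A hedged observation on the code itself: the callback tests for kCFSocketAcceptCallBack, yet the socket was created with kCFSocketReadCallBack, so the data argument is never the native handle of a freshly accepted connection. For a listening socket the accept-style setup usually looks like the sketch below (error handling elided; placed wherever the listening socket is configured):

        // Sketch: register for accept callbacks so CFSocket performs the accept()
        // and hands the callback the new connection's native handle.
        CFSocketContext context = { 0, self, NULL, NULL, NULL };
        readSocket = CFSocketCreateWithNative(NULL,
                                              fd,
                                              kCFSocketAcceptCallBack,
                                              AcceptCallback,
                                              &context);

        static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type,
                                   CFDataRef address, const void *data, void *info)
        {
            if (type != kCFSocketAcceptCallBack) {
                return;
            }
            // With accept callbacks, 'data' points to the native handle of the
            // accepted child connection, not the listening socket.
            CFSocketNativeHandle child = *(CFSocketNativeHandle *)data;
            ServerVistaController *obj = (ServerVistaController *)info;
            [obj startReceive:child];
        }

    Separately, CFReadStreamRead blocks until bytes arrive, which is one plausible reason the current version appears to hang; scheduling the stream on the run loop and reading from a has-bytes-available stream callback is the usual next step.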

    Read the article

  • Conversion between different instantiations of the same template

    - by Naveen
    I am trying to write an operator which converts between different instantiations of the same template. This is the sample code: template <class T = int> class A { public: A() : m_a(0){} template <class U> operator A<U>() { A<U> u; u.m_a = m_a; return u; } private: int m_a; }; int main(void) { A<int> a; A<double> b = a; return 0; } However, it gives the following error for the line u.m_a = m_a;. Error 2 error C2248: 'A::m_a' : cannot access private member declared in class 'A' d:\VC++\Vs8Console\Vs8Console\Vs8Console.cpp 30 Vs8Console I understand the error is because A<U> is a totally different type from A<T>. Is there any simple way of solving this (maybe using a friend?) other than providing setter and getter methods? I am using Visual Studio 2008 if it matters.
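
    A sketch of the friend route hinted at in the question: every instantiation of A befriends every other instantiation, which lets the conversion operator touch u.m_a directly.

        template <class T = int>
        class A
        {
        public:
            A() : m_a(0) {}

            // every A<U> may access the privates of every A<T>
            template <class U> friend class A;

            template <class U>
            operator A<U>()
            {
                A<U> u;
                u.m_a = m_a;   // now legal: A<T> is a friend of A<U>
                return u;
            }

        private:
            int m_a;
        };

        int main()
        {
            A<int> a;
            A<double> b = a;
            return 0;
        }

    The alternative is exactly what the question wants to avoid: a public accessor, or a converting constructor taking const A<U>&, which still needs the same friend declaration to read the other instantiation's private member.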

    Read the article

  • Autoscale font in a TextBox control so that it's as big as possible and still fits in the text area bounds

    - by blak3r
    I need a TextBox or some type of Multi-Line Label control which will automatically adjust the font size to make it as large as possible and yet have the entire message fit inside the bounds of the text area. I wanted to see if anyone had implemented a user control like this before developing my own. Example application: have a TextBox which will be half of the area on a Windows form. When a message comes in, which will be approximately 100-500 characters, it will put all the text in the control and set the font as large as possible. An implementation which uses Mono-supported .NET libraries would be a plus. Thanks in advance. If no one has implemented a control already... If someone knows how to test whether a given text completely fits inside the text area, that would be useful if I roll my own control. Edit: I ended up writing an extension to RichTextBox. I will post my code shortly once I've verified that all the kinks are worked out.
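
    For the "does this text fit" part, a hedged sketch using TextRenderer.MeasureText with word wrapping, shrinking the size until the wrapped text fits the control's client area. Whether it behaves identically under Mono is an assumption to verify; a binary search over the size would also be faster than this simple countdown.

        using System.Drawing;
        using System.Windows.Forms;

        static class FontFitter
        {
            // Returns the largest font (within the given range) whose wrapped text fits the control.
            public static Font Fit(Control target, string text)
            {
                const float maxSize = 72f;
                const float minSize = 6f;
                Size proposed = new Size(target.ClientSize.Width, int.MaxValue);
                TextFormatFlags flags = TextFormatFlags.WordBreak | TextFormatFlags.TextBoxControl;

                for (float size = maxSize; size >= minSize; size -= 1f)
                {
                    using (Font candidate = new Font(target.Font.FontFamily, size))
                    {
                        Size needed = TextRenderer.MeasureText(text, candidate, proposed, flags);
                        if (needed.Width <= target.ClientSize.Width &&
                            needed.Height <= target.ClientSize.Height)
                        {
                            return new Font(target.Font.FontFamily, size);
                        }
                    }
                }
                return new Font(target.Font.FontFamily, minSize);
            }
        }

        // usage: myTextBox.Font = FontFitter.Fit(myTextBox, message);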

    Read the article

  • CXF code first service, WSDL generation; soap:address changes?

    - by jcalvert
    I have a simple Java interface/implementation I am exposing via CXF. I have a jaxws element in my Spring configuration file like this: <jaxws:endpoint id="managementServiceJaxws" implementor="#managementService" address="/jaxws/ManagementService" > </jaxws:endpoint> It generates the WSDL from my annotated interface and exposes the service. Then when I hit http://myhostname/cxf/jaxws/ManagementService?wsdl I get a lovely WSDL. At the bottom in the wsdl:service element, I'll see <soap:address location="http://myhostname/cxf/jaxws/ManagementService"/> However, some time a day or so later, with no application restart, hitting that same URL produces a soap:address pointing at localhost instead. This causes a number of problems, but what I really want is to fix it. Right now, there's a particular client to the webservice that sets the endpoint to localhost, because it runs on the same machine. Is it possible the WSDL is getting regenerated and cached and then exposing the 'localhost' version? In part I don't know the exact mechanism by which one goes from a ?wsdl request in CXF to the response. It seems almost certain that it's retrieving some cached version, given that it's supposed to be determining the address by asking the servlet container (Jetty). For reference, I know a stopgap solution is using the hostname on the client and making sure an alias is in place so that it goes over the loopback. EDIT: For reference, I confirmed that if I bring my application up and first hit it over localhost, then querying for the WSDL via the hostname shows the address as localhost. Conversely, first hitting it over the hostname causes localhost requests to show the hostname. So obviously something is getting cached here.
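
    This matches the behaviour where CXF fills in the soap:address from the first request it serves and then hands back the cached WSDL. A hedged sketch of pinning the address explicitly (the publishedEndpointUrl attribute is recalled from the jaxws:endpoint schema, so verify it against the CXF version in use; the URL is of course the real external one):

        <jaxws:endpoint id="managementServiceJaxws"
                        implementor="#managementService"
                        address="/jaxws/ManagementService"
                        publishedEndpointUrl="http://myhostname/cxf/jaxws/ManagementService">
        </jaxws:endpoint>

    With that set, the soap:address in the generated WSDL no longer depends on which hostname happened to hit ?wsdl first.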

    Read the article

  • Java: Tracking a user login session - Session EJBs vs HTTPSession

    - by bguiz
    If I want to keep track of a conversational state with each client using my web application, which is the better alternative - a Session Bean or a HTTP Session - to use? Using HTTP Session: //request is a variable of the class javax.servlet.http.HttpServletRequest //UserState is a POJO HttpSession session = request.getSession(true); UserState state = (UserState)(session.getAttribute("UserState")); if (state == null) { //create default value .. } String uid = state.getUID(); //now do things with the user id Using Session EJB: In the implementation of ServletContextListener registered as a Web Application Listener in WEB-INF/web.xml: //UserState NOT a POJO this this time, it is //the interface of the UserStateBean Stateful Session EJB @EJB private UserState userStateBean; public void contextInitialized(ServletContextEvent sce) { ServletContext servletContext = sce.getServletContext(); servletContext.setAttribute("UserState", userStateBean); ... In a JSP: public void jspInit() { UserState state = (UserState)(getServletContext().getAttribute("UserState")); ... } Elsewhere in the body of the same JSP: String uid = state.getUID(); //now do things with the user id It seems to me that the they are almost the same, with the main difference being that the UserState instance is being transported in the HttpRequest.HttpSession in the former, and in a ServletContext in the case of the latter. Which of the two methods is more robust, and why?

    Read the article

  • Specifying SOAP Headers for a Zend_Soap Service

    - by Stephen
    I have a generally straightforward web service that I've written (converting code to ZF from a Java implementation of the same service and trying to maintain the same WSDL structure as much as possible). The service loads a PHP class, rather than individual functions. The PHP class contains three different functions within it. Everything seems to be working just fine, except that I can't seem to figure out how to specify that a given function parameter should be passed as a SOAP header. I've not seen any mention of SOAP headers in the server context, only how to pass header parameters from a client to a server. In addition to the standard parameters for the function that would be sent in the SOAP body and detailed in the docblock, I would like to specify two parameters (a username and password) that would be sent in a SOAP header. I have to assume this is possible, but haven't been able to find anything online, nor have I had any responses to a similar post on Zend's forum. Is there something that can be added in the docblock area to specify a parameter as a header (maybe in a similar fashion to using WebParam?)? Any suggestions/examples on how to get this accomplished would be greatly appreciated!
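
    One avenue, hedged because it leans on the behaviour of PHP's underlying SoapServer rather than anything Zend_Soap-specific: ext/soap dispatches an incoming header element to a method on the handler class with the same name as the header, before invoking the body operation, so credentials sent in a header can be captured there. A sketch with all names illustrative:

        <?php
        // Sketch: the class handed to Zend_Soap_Server::setClass().
        class ClaimService
        {
            private $authenticated = false;

            // Called for a SOAP header element named AuthHeader, i.e.
            // <AuthHeader><username>..</username><password>..</password></AuthHeader>
            public function AuthHeader($header)
            {
                $this->authenticated =
                    ($header->username === 'expected' && $header->password === 'secret');
            }

            /**
             * Normal body operation; the docblock still describes only body parameters.
             * @param  string $claimId
             * @return string
             */
            public function getClaim($claimId)
            {
                if (!$this->authenticated) {
                    throw new SoapFault('Client', 'Not authorised');
                }
                return "claim " . $claimId;
            }
        }

    Note this handles the header at runtime only; the autodiscovered WSDL will not advertise it, so advertising the header generally means post-processing or hand-maintaining that part of the WSDL.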

    Read the article

  • MVC design for archived data view

    - by Hemant Tank
    Implementation of a standard archive process in ASP.NET MVC. Backend SQL Server 2005. We've an existing web app built in MVC. We've an entity "Claim" and it has some child entities like ClaimDetails, Files, etc... A pretty standard setup in the DB. Each entity has its own table and they are linked via FK. Now, we need to have an "Archive" feature in the web app which will allow an admin to archive a Claim and its child entities. An archived Claim should become read-only when visited again. Here're some points on which I need your valued opinion - To keep it simple and scalable (for a few million records), for now we plan to simply add a bit field "Archived" to the Claim table in the db, and change the behavior accordingly in the web app. We've a 'Manage claim' page which renders a bunch of different views for Claim and its child entities. Now, for a read-only view we can either use the same views or have a separate set of views. What do you suggest? At controller level, we can identify an archived claim and select which view to render. At model level, it'd be great to be able to use the same model used for Manage Claim, but it might not get us the "text" of some lookup fields. For example, Claim.BrandId is rendered as a dropdown in Manage claim (requires only BrandId), but for the read-only view we need 'BrandText'. Any existing reference or architecture-level example would be great. Here's my previous SO post but it's more about db-level changes: Design a process to archive data (SQL Server 2005) Thank you.
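
    A hedged sketch of the controller-level branch described above, with a flattened read-only view model that already carries the lookup text. All type, view and repository names are illustrative stand-ins for the existing application's own pieces.

        using System.Web.Mvc;

        // Illustrative types only; the real entities and data access come from the existing app.
        public class Claim { public int Id; public int BrandId; public bool Archived; }
        public interface IClaimRepository { Claim GetById(int id); string GetBrandName(int brandId); }
        public class ArchivedClaimViewModel { public int ClaimId; public string BrandText; }

        public class ClaimController : Controller
        {
            private readonly IClaimRepository _claims;

            public ClaimController(IClaimRepository claims) { _claims = claims; }

            public ActionResult Manage(int id)
            {
                Claim claim = _claims.GetById(id);

                if (claim.Archived)
                {
                    // Archived claims get a read-only view backed by a view model that
                    // resolves lookups to display text (BrandText instead of BrandId).
                    var model = new ArchivedClaimViewModel
                    {
                        ClaimId = claim.Id,
                        BrandText = _claims.GetBrandName(claim.BrandId)
                    };
                    return View("ManageReadOnly", model);
                }

                return View("Manage", claim);
            }
        }

    Keeping the editable views untouched and giving the archived path its own thin view model is usually the lower-friction split, since the read-only screens tend to diverge anyway (no dropdowns, no validation).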

    Read the article

  • Silverlight Navigate: how does it work? How would you implement it in F# w/o VS wizards and helpers?

    - by akaphenom
    After a night's sleep the problem can be stated more accurately: I have a 100% F# / Silverlight implementation and am looking to use the built-in Navigation components. C# creates page.xaml and page.xaml.cs - um, ok; but what is the relationship at a fundamental level? How would I go about doing this in F#? The application is loaded in the default module, and I pull the XAML in and reference it from the application object. Do I need to create instances / references to the pages from within the application object? Or set up some other page management object with the proper name/value pairs? When all the help of VS is stripped away, what are we left with? Original post (for those who may be reading replies): I have a 100% Silverlight 3.0 / F# 2.0 application I am wrapping my brain around. I have the base application loading correctly, and now I want to add the navigation controls to it. My page is stored as an embedded resource, but Frame.Navigate takes a URI. I know what I have is wrong but here it is: let nav : Frame = mainGrid ? mainFrame let url = "/page1.xaml" let uri = new System.Uri(url, System.UriKind.Relative) ; nav.Navigate uri Any thoughts?
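
    For the first question, a hedged sketch of what the C# wizard actually wires up: page.xaml.cs is just a partial class whose generated InitializeComponent calls Application.LoadComponent with a component URI for the XAML resource, and the navigation Frame later instantiates that class when Navigate resolves the URI. An F# page can do the same by hand; the assembly name, file names and x:Class value below are assumptions that have to match the project, and the XAML must be compiled into the assembly as a resource.

        namespace MyApp

        open System
        open System.Windows
        open System.Windows.Controls

        // Sketch: the F# counterpart of page.xaml + page.xaml.cs.
        // Page1.xaml should carry x:Class="MyApp.Page1" on its root element.
        type Page1() as this =
            inherit Page()
            do
                // What the generated InitializeComponent does in the C# version.
                Application.LoadComponent(
                    this,
                    Uri("/MyApp;component/Page1.xaml", UriKind.Relative))

        // Navigating to it from the hosting frame:
        //   nav.Navigate(Uri("/Page1.xaml", UriKind.Relative)) |> ignore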

    Read the article
