Search Results

Search found 3423 results on 137 pages for 'tom daniel'.

Page 17/137 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Flex service in debug

    - by Tom
    Hello everybody, I am trying to learn the new services method in Flex 4, but I can't get it to work. A test operation against the service in Flash Builder 4 works, but when I run the code I get NetConnection.Call.Failed: HTTP: Failed. Does somebody know what the problem could be? Tom CODE: PHP <?php class AuthService { public function login($username, $password) { return 'ok'; } public function logout() { return true; } } ?> FLEX <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600" xmlns:authservice="services.authservice.*"> <fx:Script> <![CDATA[ import mx.controls.Alert; protected function button_clickHandler(event:MouseEvent):void { loginResult.token = authService.login(username, password); } ]]> </fx:Script> <fx:Declarations> <s:CallResponder id="loginResult"/> <authservice:AuthService id="authService" fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)" showBusyCursor="true"/> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:Button x="97" y="193" label="Button" id="button" click="button_clickHandler(event)"/> <s:TextInput x="91" y="87" id="username"/> <s:TextInput x="97" y="117" id="password"/> </s:Application>

    Read the article

  • Java NIO (Netty): How does Encryption or GZIPping work in theory (with filters)

    - by Tom
    Hello experts, I would be very thankful if you could explain to me how, in theory, the "Interceptor/Filter" pattern on byte streams (over sockets/channels) works (in asynchronous IO with Netty) with regard to encryption or compression of data. Say I have a filter that does GZIPping. How is this implemented internally? Does the filter "collect" enough bytes from the channel so that there is a useful number of bytes that can then be en/decoded? What, in general, is the minimal "block size" (data to encode/decode in one chunk) for socket-based gzipping? Does this block size have to be negotiated in advance between server and client? What happens if the client does not send enough data to "fill" the block size (due to network congestion) but does not close the connection? Does this mean the other side will simply wait until it gets enough bytes to decode, or until a timeout occurs? How is the filter pattern then applied? The compression filter will de/compress the block of bytes and then store them again in the same buffer; would I (in the case of Netty) normally be using the ChannelHandlerContext to pass the en/decoded data to the next filter? Any explanations/links/tutorials (for beginners ;-)) will be very much appreciated to help me understand how, for example, encryption/compression is implemented in socket-based communication with the filter/interceptor pattern. Thank you very much, Tom
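
    For the block-size questions above, here is a minimal plain-Java sketch (using java.util.zip only, not Netty's compression handlers; the class name and buffer sizes are illustrative). It shows that GZIP/DEFLATE is a streaming format: the sender can flush at arbitrary points and the receiver decodes whatever bytes have arrived so far, so no fixed block size has to be negotiated between client and server; a side that has not yet received enough input simply waits for more bytes.

      import java.io.ByteArrayInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.util.zip.GZIPInputStream;
      import java.util.zip.GZIPOutputStream;

      public class StreamingGzipSketch {
          public static void main(String[] args) throws IOException {
              // Compress as one continuous stream; syncFlush=true lets partial data be decoded.
              ByteArrayOutputStream compressed = new ByteArrayOutputStream();
              try (GZIPOutputStream gzipOut = new GZIPOutputStream(compressed, true)) {
                  for (int i = 0; i < 10; i++) {
                      gzipOut.write(("chunk-" + i + " ").getBytes("UTF-8"));
                      gzipOut.flush(); // compressed bytes could go onto the wire here
                  }
              }

              // Decompress incrementally: read whatever is available, wait for the rest.
              GZIPInputStream gzipIn =
                      new GZIPInputStream(new ByteArrayInputStream(compressed.toByteArray()));
              byte[] buffer = new byte[32];
              int n;
              while ((n = gzipIn.read(buffer)) != -1) {
                  System.out.print(new String(buffer, 0, n, "UTF-8"));
              }
              gzipIn.close();
          }
      }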

    Read the article

  • Majordomo/Mailman - Yahoo/Google Groups

    - by tom smith
    Hi. Not an app question, but I thought I might ask here anyway! The responses might help someone. I've got an app where we're going to be dealing with 5000-10000 people in a group/pool. Periodically, different subsets of people will break off into their own group. I'm looking for thoughts on how to manage/approach this situation. (For now, all of this is being managed on a few servers in the garage, with a dynamic IP.) I've looked into Yahoo/Google Groups, and they seem to be reasonable. The primary issue that I see with this approach is that I don't have a good way of quickly/easily allowing a subset of the group to form their own group for a given project. This kind of function is critical. The upside, though, is that I wouldn't really have to set anything up, and the hosted groups could send emails to the users all day long without running into cap/bandwidth limits. The other approach is a managed list, like Mailman/Majordomo. This approach appears to be more flexible, and looks like it could be modified to handle the quick creation of lists, allowing users to quickly be assigned to different lists on the fly. The downside: I'd have to run my own Mailman/Majordomo instance, as well as the associated mail server. Or I could look at possibly using one of the hosted services, but this is a project on the cheap, so we're really trying to keep costs down. Thoughts/comments/pointers would be greatly appreciated. Thanks in advance -tom

    Read the article

  • Workflow engine BPMN, Drools, etc or ESB?

    - by Tom
    We currently have an application that is based on an in-house developed workflow engine with a YAML-based DSL. We are looking to move parts of it to Java. I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow etc. They appear to be aimed at businesses where the business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people, with a focus on human interaction, rather than towards developers. The workflows tend to look like: Discover-a-file -\ -> join -> process-file -> move-file -> register-file Discover-some-metadata -/ If any step fails we need to retry it X times. We also need to be able to stop the system, restart it, and have it continue from where it was (durable). Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backwards rule chaining sounds interesting, but it is not open source. It might be that what we are after is a finite state machine engine, or just an Enterprise Service Bus doing everything as JMS queues. Is there a good open source workflow engine that is both standards-based and also geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language agnostic (making REST/SOAP calls to external services). Thanks, Tom
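
    As a concrete illustration of the retry requirement only (nothing engine-specific; the class and step names below are hypothetical), a minimal Java sketch of wrapping a workflow step might look like this:

      import java.util.concurrent.Callable;

      // Minimal sketch of "retry a failing step X times"; not tied to any engine.
      public class RetryingStep {
          static <T> T runWithRetry(Callable<T> step, int maxAttempts) throws Exception {
              Exception last = null;
              for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                  try {
                      return step.call();
                  } catch (Exception e) {
                      last = e;
                      System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                  }
              }
              throw last; // give up after maxAttempts failures
          }

          public static void main(String[] args) throws Exception {
              // Hypothetical "discover-a-file" step, retried up to 3 times.
              String result = runWithRetry(() -> "file-discovered", 3);
              System.out.println(result);
          }
      }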

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom which will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?"-type scenarios, which have turned up as the program has been passed around to various non-technical types, that we've been unable to replicate. One of the biggest problems we're facing is that of testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote control sessions to dig into the guts of their computers is time-consuming. I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing? EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

    Read the article

  • Solr associations

    - by Tom
    Hi all, For the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are available out of the box or can easily be configured. There is, however, one feature that we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses: <document> <name>Apache</name> <cat>1</cat> ... </document> <document> <name>McDonalds</name> <cat>2</cat> ... </document> In addition we have another XML file with all the categories and synonyms: <cat id=1> <name>software</name> <synonym>IT</synonym> </cat> <cat id=2> <name>fast food</name> <synonym>restaurant</synonym> </cat> We want to associate businesses and categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we want to be able to update the categories (adding/removing synonyms...) without indexing all the businesses again. Is there anything in Solr that does this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom
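
    One way to read this requirement is to keep only the category id on the business documents and expand the user's term into matching category ids at query time, so synonym changes never touch the business index. A rough Java sketch of that idea (the field names and lookup structure are made up for illustration; this is not a built-in Solr feature):

      import java.util.Arrays;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Sketch: resolve a user term to category ids outside the index, then query
      // the business documents on their small "cat" field only.
      public class CategoryQueryExpander {
          // Rebuilt whenever the category/synonym XML changes; businesses stay untouched.
          static final Map<Integer, List<String>> CATEGORY_TERMS = new HashMap<>();
          static {
              CATEGORY_TERMS.put(1, Arrays.asList("software", "it"));
              CATEGORY_TERMS.put(2, Arrays.asList("fast food", "restaurant"));
          }

          static String categoryQuery(String userTerm) {
              StringBuilder ids = new StringBuilder();
              for (Map.Entry<Integer, List<String>> e : CATEGORY_TERMS.entrySet()) {
                  if (e.getValue().contains(userTerm.toLowerCase())) {
                      if (ids.length() > 0) ids.append(" OR ");
                      ids.append(e.getKey());
                  }
              }
              return ids.length() == 0 ? null : "cat:(" + ids + ")";
          }

          public static void main(String[] args) {
              System.out.println(categoryQuery("IT")); // prints cat:(1)
          }
      }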

    Read the article

  • Entity framework memory leak after detaching newly created object

    - by Tom Peplow
    Hi, Here's a test: WeakReference ref1; WeakReference ref2; TestRepositoryEntitiesContainer context; int i = 0; using (context = GetContext<TestRepositoryEntitiesContainer>()) { context.ObjectMaterialized += (o, s) => i++; var item = context.SomeEntities.Where(e => e.SomePropertyToLookupOn == "some property").First(); context.Detach(item); ref1 = new WeakReference(item); var newItem = new SomeEntity {SomePropertyToLookupOn = "another value"}; context.SomeEntities.AddObject(newItem); ref2 = new WeakReference(newItem); context.SaveChanges(); context.SomeEntities.Detach(newItem); newItem = null; item = null; } context = null; GC.Collect(); Assert.IsFalse(ref1.IsAlive); Assert.IsFalse(ref2.IsAlive); First assert passes, second fails... I hope I'm missing something, it is late... But it appears that detaching a fetched item will actually release all handles on the object letting it be collected. However, for new objects something keeps a pointer and creates a memory leak. NB - this is EF 4.0 Anyone seen this before and worked around it? Thanks for your help! Tom

    Read the article

  • web design question (php/ajax)

    - by tom smith
    Hi guys... Hope this isn't a waste of your time. I'm working on a project, and it occurred to me that there's a chunk of code out there that should allow me to see how others have implemented this. I've got a project where I'm going to have a page with a select box. The user will select an item from the select list, and based on the item selected, a separate section of the page (areaB) will change in terms of the content/tables being displayed. I then want to allow the user to go through a series of subpages in areaB, where the user goes through a submit/cancel/confirm process, where the stuff in areaB changes, with the rest of the page remaining the same... I'm trying to figure out the best approach to implement this on both the client and server side. I could just have an ugly "if block" where I have a bunch of logic, and I completely regenerate the page each time the user selects an action... I could have an approach that might involve divs/frames, where I then just regenerate the targeted frame/div area... is this even possible? I could have some form of Ajax process, which would only alter the targeted section(s) of the page... So, I'm trying to talk to anyone who has ideas on how to do this, or more ideally, who knows of a good (client/server side) code example of this that I can examine. I'd really appreciate it! I've got a more detailed overview but didn't know if it would be cool to post it here... Thanks, tom

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the IP in place of the domain when setting up the tunnel). The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username / password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests. At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command: ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1 I can tell Subversion that the repo is located at: https://username.svn.beanstalkapp.com:8080/repo-name This uses my tunnel, and the username and password are accepted. So, my question is: is there an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround?

    Read the article

  • call addsubview again causes slowdown

    - by Tom
    Hi guys, I am writing a little music game for the iPhone. I am almost done; this is the only issue which keeps me from rolling it out, so any help solving it is much appreciated. This is what I do: in my appDelegate I add my menu-view screen to the window. The menu-view screen acts as a container and controls which view gets presented to the user. That means on the menu-view screen I've got 4 buttons (new game, options, faq, highscore). When the user clicks on a button, something like this happens: if (self.gameViewController == nil) { GameViewController *viewController = [[GameViewController alloc] initWithNibName:@"GameViewController" bundle:nil]; self.gameViewController = viewController; [viewController release]; } [self.view addSubview:self.gameViewController.view]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleSwitchViewNotificationFromGameView:) name:@"SwitchView" object:gameViewController]; When the user returns to the menu, this piece of code gets executed: [[NSNotificationCenter defaultCenter] removeObserver:self]; [self.gameViewController viewWillDisappear:YES]; [self.gameViewController.view removeFromSuperview]; This works fine for all screens but not for the game screen (well, this is the only one with heaps of user interaction), meaning the responsiveness of the iPhone (when playing tones) gets really slow. The performance is fine when I display the game view for the first time; it starts getting slower as soon as I add it to the menu view's container subviews again (addSubview), basically when opening up a new game. Any ideas what causes this (or how to get around it)? Thanks heaps. Best regards, Tom

    Read the article

  • Symfony: How to hide form fields from display and then set values for them in the action class

    - by Tom
    I am fairly new to Symfony and I have 2 fields relating to my table "Pages": created_by and updated_by. These are related to the users table (sfGuardUser) as foreign keys. I want these to be hidden from the edit/new forms, so I have set up the generator.yml file to not display these fields: form: display: General: [name, template_id] Meta: [meta_title, meta_description, meta_keywords] Now I need to set the fields on save. I have been searching for how to do this all day and have tried a hundred methods. The method I have got working is this, in the actions class: protected function processForm(sfWebRequest $request, sfForm $form) { $form_params = $request->getParameter($form->getName()); $form_params['updated_by'] = $this->getUser()->getGuardUser()->getId(); if ($form->getObject()->isNew()) $form_params['created_by'] = $this->getUser()->getGuardUser()->getId(); $form->bind($form_params, $request->getFiles($form->getName())); So this works. But I get the feeling that ideally I shouldn't be modifying the web request, but instead modifying the form/object directly. However, I haven't had any success with things like: $form->getObject()->setUpdatedBy($this->getUser()->getGuardUser()); If anyone could offer any advice on the best way to solve this type of problem I would be very grateful. Thanks, Tom

    Read the article

  • Custom Solr sorting

    - by Tom
    Hello everyone, I've been asked to do an evaluation of Solr as an alternative to a commercial search engine. The application now has a very particular way of sorting results, using something called "buckets". I'll try to explain in a bit of detail: In the interface they have 2 fields: "what" and "where". Both fields are actually sets of fields (what = category, name, contact info... and where = country, state, region, city...), so the copyfield feature of Solr immediately comes to mind. Now, based on which field produced the actual match, the result should end up in a specific bucket. In particular, the first bucket contains all the result documents that have an exact match on the category field, the second bucket all exact matches on name, the third partial matches on category, the fourth partial matches on name, the fifth matches on contact info, etc... Then within each of those first-tier buckets all results are placed in second-tier buckets depending on what location was matched: city, then region, then province and so on. To complicate things even more, there is also a third-tier bucket where results are placed according to the value of a ranking field: all documents with the value 1 in the ranking field go in bucket 1 and so on. And finally, results should be randomized within the third-tier buckets... On top of this they obviously want support for facets and paging. My apologies for the long mail, but I would greatly appreciate feedback and/or suggestions. I'm aware that this is a very particular problem, but anything that points me in the right direction is helpful. Cheers, Tom
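
    To make the bucket ordering concrete, it boils down to sorting on a composite key. A rough Java sketch of such a comparator, applied to results after retrieval (the Result fields are hypothetical stand-ins for values computed per hit, not anything Solr provides directly), could look like this:

      import java.util.Comparator;

      // Sketch of the three-tier "bucket" ordering described above.
      class Result {
          int matchBucket;    // 1 = exact category, 2 = exact name, 3 = partial category, ...
          int locationBucket; // 1 = city, 2 = region, 3 = province, ...
          int ranking;        // value of the ranking field
          long shuffleKey;    // random tie-breaker assigned once per query
      }

      class BucketComparator implements Comparator<Result> {
          @Override
          public int compare(Result a, Result b) {
              int c = Integer.compare(a.matchBucket, b.matchBucket);
              if (c != 0) return c;
              c = Integer.compare(a.locationBucket, b.locationBucket);
              if (c != 0) return c;
              c = Integer.compare(a.ranking, b.ranking);
              if (c != 0) return c;
              return Long.compare(a.shuffleKey, b.shuffleKey); // randomize within the bucket
          }
      }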

    Read the article

  • Java Concurrency : Volatile vs final in "cascaded" variables?

    - by Tom
    Hello experts, is final Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); the same as volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); in case the inner Map is accessed by different threads? Or is even something like this required: volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); volatile Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); In case it is NOT a "cascaded" map, final and volatile have in the end the same effect of making sure that all threads always see the correct contents of the Map... But what happens if the Map itself contains a Map, as in the example? How do I make sure that the inner Map is correctly "memory barriered"? Thanks! Tom
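
    For reference, a small self-contained sketch of the first variant (class and method names are illustrative): the outer reference is final, and visibility of the inner map's contents comes from ConcurrentHashMap's own happens-before guarantee between a put() and a later get() of the same entry, not from volatile.

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Sketch: a final outer ConcurrentHashMap holding inner ConcurrentHashMaps.
      // ConcurrentHashMap.put/get establish happens-before edges, so a reader thread
      // that obtains the inner map via get() also sees the entries written before put().
      public class NestedStatus {
          private final Map<Integer, Map<String, Integer>> status =
                  new ConcurrentHashMap<Integer, Map<String, Integer>>();

          public void update(int key, String field, int value) {
              // Atomically create the inner map if missing (inner map is also concurrent).
              status.computeIfAbsent(key, k -> new ConcurrentHashMap<String, Integer>())
                    .put(field, value);
          }

          public Integer read(int key, String field) {
              Map<String, Integer> inner = status.get(key);
              return inner == null ? null : inner.get(field);
          }
      }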

    Read the article

  • How to detect if a certain range resides (partly) within an other range?

    - by Tom
    Let's say I've got two squares and I know their positions, a red and a blue square: redTopX; redTopY; redBotX; redBotY; blueTopX; blueTopY; blueBotX; blueBotY; Now, I want to check if square blue resides (partly) within (or around) square red. This can happen in a lot of situations, as you can see in this image I created to illustrate my situation better: Note that there's always only one blue and one red square; I just added multiple so I didn't have to redraw 18 times. My original logic was simple: I'd check all corners of square blue and see if any of them are inside square red: if ( ((redTopX >= blueTopX) && (redTopY >= blueTopY) && (redTopX <= blueBotX) && (redTopY <= blueBotY)) || //top left ((redBotX >= blueTopX) && (redTopY >= blueTopY) && (redBotX <= blueBotX) && (redTopY <= blueBotY)) || //top right ((redTopX >= blueTopX) && (redBotY >= blueTopY) && (redTopX <= blueBotX) && (redBotY <= blueBotY)) || //bottom left ((redBotX >= blueTopX) && (redBotY >= blueTopY) && (redBotX <= blueBotX) && (redBotY <= blueBotY)) //bottom right ) { //blue resides in red } Unfortunately, there are a couple of flaws in this logic. For example, what if red surrounds blue (like in situation 1)? I thought this would be pretty easy, but I'm having trouble coming up with a good way of covering all these situations. Can anyone help me out here? Regards, Tom
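
    For comparison, the usual axis-aligned test works per axis rather than per corner, which also covers the case where one square fully contains the other. A small Java sketch, assuming Top holds the smaller coordinate and Bot the larger on each axis (as the variable names above suggest):

      // Sketch: two axis-aligned squares overlap (partially, or with one fully inside
      // the other) exactly when their x ranges overlap AND their y ranges overlap.
      public class OverlapCheck {
          static boolean overlaps(int redTopX, int redTopY, int redBotX, int redBotY,
                                  int blueTopX, int blueTopY, int blueBotX, int blueBotY) {
              boolean xOverlap = redTopX <= blueBotX && blueTopX <= redBotX;
              boolean yOverlap = redTopY <= blueBotY && blueTopY <= redBotY;
              return xOverlap && yOverlap;
          }

          public static void main(String[] args) {
              // Red fully surrounds blue: still reported as overlapping.
              System.out.println(overlaps(0, 0, 10, 10, 2, 2, 4, 4)); // true
          }
      }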

    Read the article

  • Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?

    - by Tom K
    My customer has his own graphics designer he wants to use to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012. One idea is for us to publish our solution to a file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing to a test SQL database in Azure isn't a problem. Then he can make changes to .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in, but he needs to deliver a working design, so that seems inefficient. The graphics designer will have access to our test site so he can see how it works and what data and fields we have. Another idea is for him to build a static mock site using just HTML/CSS. Later I'd integrate his styles into the customer's solution, split his HTML into the partial views we use and add Razor syntax. Again, we'd like to leverage the graphics designer for all of this. Is there a best practice documented around this subject? How do other teams deal with this situation?

    Read the article

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this as I cannot find documentation on how to do it anywhere else; I hope others find it useful. For starters, this was not an easy task for me to figure out. I really would like to know why it is so cumbersome to do something that seems like a lot of people would need to do, but that's for another day. The dropdown I wanted to add was for 'Account Status', which would display whether the account is 'Enabled' or 'Disabled' in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface. The first thing I had to do was create the attribute itself. This is done by going to Administration -> Schema Management in the FIM 2010 portal. Once here, click on All Attributes. What is listed here are all attributes and their associated Resource Types in FIM. To create the 'AccountStatus' attribute, click on New. As shown below, enter 'AccountStatus' with no spaces for the System Name and 'Account Status' for the Display Name. The Data Type is going to be 'Indexed String'. Click Next. Leave everything on the Localization tab at its default and click Next. On the Validation tab, as shown below, we will enter the regular expression ^(Enabled|Disabled)?$ for our two desired string values 'Enabled' and 'Disabled'. Click on Finish and then Submit to complete adding the attribute. The next step involves associating the attribute with a resource type. This is called 'Binding' the attribute. From the Schema Management page, click on All Bindings. From the page that comes up, click on New. As shown below, enter 'User' for the Resource Type and 'Account Status' for the Attribute Type. This essentially binds the Account Status attribute to the 'User' Resource Type. Click Next. On the 'Attribute Override' tab, type in 'Account Status' for the Display Name field. Click Next. On the 'Localization' tab, click Next. On the 'Validation' tab, enter the regular expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete. Now that the Attribute and the Binding are complete, you have to give users permission to see the attribute on the User Edit page. Go to Administration -> Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the 'Target Resources' tab and look at the section named Resource Attributes. At the end, type in the 'Account Status' attribute and check it with the validator. Once done, click on OK to save the changes. Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for User Editing. Go to Administration -> Resource Control Display Configuration. From here, navigate until you find the RCDC named Configuration for User Editing and click on it. The following is what you will see: The first step is to export the Configuration Data file. Click on the Export configuration link and save the file to your desktop or other folder. Find the file you just exported and open it in your XML editor of choice. I use Notepad, but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. I used EmployeeType as my example. Copy the control from the beginning tag named <my:Control… to the ending tag </my:Control>. Now take what you copied and paste it in whatever location you desire within the form, between two other controls. I chose to place the 'Account Status' field after the 'Account Name' field. After you paste the control, you will need to modify it so it looks like this: Notice that the attribute you are dealing with is specified wherever AccountStatus appears in the XML. Once you have finished modifying this, save the file and make sure it is a .xml file. Now go back to the Configuration for User Editing screen and look at the section named 'Configuration Data'. Click the 'Browse' button, find the XML file you just modified and choose it. Click OK at the bottom of the window and you are done! Now when you click on a user's name in the FIM Portal, you should see the newly added dropdown box as below: Later I will post more about this dropdown, specifically on how to automate actually 'Disabling' the account in the data source through the FIM Workflows and MAs. <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList" my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}" my:Description="{Binding Source=schema, Path=AccountStatus.Description}" my:RightsLevel="{Binding Source=rights, Path=AccountStatus}"> <my:Properties> <my:Property my:Name="ValuePath" my:Value="Value"/> <my:Property my:Name="CaptionPath" my:Value="Caption"/> <my:Property my:Name="HintPath" my:Value="Hint"/> <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/> <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/> </my:Properties> </my:Control>

    Read the article

  • Install usblib package - Ubuntu

    - by Tom celic
    I need the package libusb for another package I am installing. I tried the following which seemed to install the package, sudo apt-get install libusb-dev but when I try to install the other package I get, configure: error: Package requirements (libusb-1.0 >= 0.9.1) were not met: No package 'libusb-1.0' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables LIBUSB_CFLAGS and LIBUSB_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. When I run the command dpkg -L libusb-dev, I get: /. /usr /usr/bin /usr/bin/libusb-config /usr/include /usr/include/usb.h /usr/lib /usr/lib/libusb.a /usr/lib/libusb.la /usr/lib/pkgconfig /usr/lib/pkgconfig/libusb.pc /usr/share /usr/share/doc /usr/share/doc/libusb-dev /usr/share/doc/libusb-dev/html /usr/share/doc/libusb-dev/html/index.html /usr/share/doc/libusb-dev/html/preface.html /usr/share/doc/libusb-dev/html/intro.html /usr/share/doc/libusb-dev/html/intro-overview.html /usr/share/doc/libusb-dev/html/intro-support.html /usr/share/doc/libusb-dev/html/api.html /usr/share/doc/libusb-dev/html/api-device-interfaces.html /usr/share/doc/libusb-dev/html/api-timeouts.html /usr/share/doc/libusb-dev/html/api-types.html /usr/share/doc/libusb-dev/html/api-synchronous.html /usr/share/doc/libusb-dev/html/api-return-values.html /usr/share/doc/libusb-dev/html/functions.html /usr/share/doc/libusb-dev/html/ref.core.html /usr/share/doc/libusb-dev/html/function.usbinit.html /usr/share/doc/libusb-dev/html/function.usbfindbusses.html /usr/share/doc/libusb-dev/html/function.usbfinddevices.html /usr/share/doc/libusb-dev/html/function.usbgetbusses.html /usr/share/doc/libusb-dev/html/ref.deviceops.html /usr/share/doc/libusb-dev/html/function.usbopen.html /usr/share/doc/libusb-dev/html/function.usbclose.html /usr/share/doc/libusb-dev/html/function.usbsetconfiguration.html /usr/share/doc/libusb-dev/html/function.usbsetaltinterface.html /usr/share/doc/libusb-dev/html/function.usbresetep.html /usr/share/doc/libusb-dev/html/function.usbclearhalt.html /usr/share/doc/libusb-dev/html/function.usbreset.html /usr/share/doc/libusb-dev/html/function.usbclaiminterface.html /usr/share/doc/libusb-dev/html/function.usbreleaseinterface.html /usr/share/doc/libusb-dev/html/ref.control.html /usr/share/doc/libusb-dev/html/function.usbcontrolmsg.html /usr/share/doc/libusb-dev/html/function.usbgetstring.html /usr/share/doc/libusb-dev/html/function.usbgetstringsimple.html /usr/share/doc/libusb-dev/html/function.usbgetdescriptor.html /usr/share/doc/libusb-dev/html/function.usbgetdescriptorbyendpoint.html /usr/share/doc/libusb-dev/html/ref.bulk.html /usr/share/doc/libusb-dev/html/function.usbbulkwrite.html /usr/share/doc/libusb-dev/html/function.usbbulkread.html /usr/share/doc/libusb-dev/html/ref.interrupt.html /usr/share/doc/libusb-dev/html/function.usbinterruptwrite.html /usr/share/doc/libusb-dev/html/function.usbinterruptread.html /usr/share/doc/libusb-dev/html/ref.nonportable.html /usr/share/doc/libusb-dev/html/function.usbgetdrivernp.html /usr/share/doc/libusb-dev/html/function.usbdetachkerneldrivernp.html /usr/share/doc/libusb-dev/html/examples.html /usr/share/doc/libusb-dev/html/examples-code.html /usr/share/doc/libusb-dev/html/examples-tests.html /usr/share/doc/libusb-dev/html/examples-other.html /usr/share/doc/libusb-dev/copyright /usr/share/doc-base /usr/share/doc-base/libusb-dev /usr/share/man /usr/share/man/man1 
/usr/share/man/man1/libusb-config.1.gz /usr/lib/libusb.so /usr/share/doc/libusb-dev/changelog.Debian.gz Any ideas??

    Read the article

  • How can I obtain a HBITMAP or HICON from a Direct2D bitmap?

    - by Tom
    Is there any way to obtain a HBITMAP or HICON from a ID2D1Bitmap * using Direct2D? I am using this function to load the bitmap. The reason I ask is because I am creating my level editor tool and would like to draw a PNG image on a standard button control. I know that you can do this using GDI+: HBITMAP hBitmap; Gdiplus::Bitmap b(L"a.png"); b.GetHBITMAP(NULL, &hBitmap); SendMessage(GetDlgItem(hDlg, IDC_BUTTON1), BM_SETIMAGE, IMAGE_BITMAP, (LPARAM)hBitmap); Is there any equivalent, simple solution using Direct2D? If possible, I would like to render multiple PNG files (some with transparency) on a single button.

    Read the article

  • svn connection timeout

    - by Tom celic
    I have Ubuntu 12.04 running in VirtualBox inside Windows 7. I have the network adapter set as NAT and everything networking-wise seems to be running smoothly (internet / git etc.). However, for some reason, svn always times out, e.g.: michael@michael-VirtualBox:~/Documents/deleteme$ svn co svn://svn.openwrt.org/openwrt/trunk/ svn: Can't connect to host 'svn.openwrt.org': Connection timed out Somebody suggested to me that I might need to change which ports svn uses. Does anybody have any idea how to diagnose / solve the problem? Thanks!

    Read the article

  • Example WLST Script to Obtain JDBC and JTA MBean Values

    - by Daniel Mortimer
    Introduction Following on from the blog entry "Get an Offline or Online WebLogic Domain Summary Using WLST!", I have had a request to create a smaller example which only collects a selection of JDBC (System Resource) and JTA configuration and runtime MBean values. So, here it is. Download Sample Script You can grab the sample script by clicking here. Instructions to Run: 1. After download, extract the zip to the machine hosting the WebLogic environment. You should have three directories (output, Sample_Output, scripts) along with a readme.txt. 2. In the scripts directory, find the start wrapper script startWLSTJDBCSummarizer.sh (Unix) or startWLSTJDBCSummarizer.cmd (MS Windows). Open the appropriate file in an editor and change the environment variable settings to suit your system. Example - startWLSTDomainSummarizer.cmd set WL_HOME=D:\product\FMW11g\wlserver_10.3 set DOMAIN_HOME=D:\product\FMW11g\user_projects\domains\MyDomain set WLST_OUTPUT_PATH=D:\WLSTDomainSummarizer\output\ set WLST_OUTPUT_FILE=WLST_JDBC_Summary_Via_MBeans.html call "%WL_HOME%\common\bin\wlst.cmd" WLS_JDBC_Summary_Online.py Note: The WLST_OUTPUT_PATH directory value must have a trailing slash. If there is no trailing slash, the script will error and not continue. 3. Run the shell / command line wrapper script. It should launch WLST and kick off "WLS_JDBC_Summary_Online.py". This will hit you with some prompts, e.g. Is your domain Admin Server up and running and do you have the connection details? (Y /N ): Y Enter connection URL to Admin Server e.g. t3://mymachine.acme.com:7001 : t3://localhost:7001 Enter weblogic username: weblogic Enter weblogic username password (function prompt 1): welcome1 (Note: the value typed in for the password will not be echoed back to the console.) 4. If the scripts run successfully, you should get an HTML summary in the specified output directory. See example screenshots below: Screenshot 1 - JDBC System Resource Tab Page Screenshot 2 - JTA Tab Page 5. For the HTML to render correctly, ensure the .js and .css files provided (review the output directory created by the zip file extraction) are accessible. For example, to view the HTML locally (without using a web server), place the HTML output, jquery-ui.js, spry.js and wlstsummarizer.css in the same directory. Disclaimer This is a sample script. I have tested it against WebLogic Server 10.3.6 domains on MS Windows and Unix. I cannot guarantee that the script will run error-free or produce the expected output on your system. If you have any feedback, add a comment to the blog. I will endeavour to fix any problems with my WLST code. Credits JQuery: http://jquery.com/ Spry (Adobe): https://github.com/adobe/Spry Red Team Design: http://www.red-team-design.com/cool-headings-with-pseudo-elements

    Read the article

  • Objective-C As A First OOP Language?

    - by Daniel Scocco
    I am just finishing the second semester of my CS degree. So far I have learned C and all the fundamental algorithms and data structures (e.g., searching, sorting, linked lists, heaps, hash tables, trees, graphs, etc.). Next year we'll start with OOP, using either Java or C++. Recently I got some ideas for some iPhone apps and got itchy to start working on them. However, I have heard some bad things about Objective-C in the past, so I am wondering if learning it as my first OOP language could be a problem. Not to mention that I think it will be hard to find books/online courses that teach basic OOP concepts using Objective-C to illustrate them (as opposed to books using Java or C++, of which there are plenty), so this could be another problem. In summary: should I start learning Objective-C and OOP concepts right now on my own, or wait one more semester until I learn Java/C++ at university and then jump into Objective-C? Update: For those interested in getting started with OOP via Objective-C, I just found some nice tutorials inside Apple's Developer Library - http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/OOP_ObjC/Introduction/Introduction.html

    Read the article

  • Packing a DBF

    - by Tom Hines
    I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized / remembered the DBF driver does not ACTUALLY delete records from the data source -- it only marks them for deletion.  You are responsible for handling the "chaff" either by using a utility to remove deleted records or by simply ignoring them.  If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content. So, I went hunting for a method to "Pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FOXPRO driver at ( http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx ).  Once installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests.  Using MSQuery, I simply tested the raw SQL command Pack {tablename} and it WORKED! One really neat thing is the PACK command is used like regular SQL instructions; "Pack {tablename}" is all that is needed. It is necessary, however, to close all connections to the database before issuing the PACK command.    Here is some C# code for a Pack method.         /// <summary>       /// Pack the DBF removing all deleted records       /// </summary>       /// <param name="strTableName">The table to pack</param>       /// <param name="strError">output of any errors</param>       /// <returns>bool (true if no errors)</returns>       public static bool Pack(string strTableName, ref string strError)       {          bool blnRetVal = true;          try          {             OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()             {                Dsn = "PSAP_FOX_DBF"             };             string strSQL = "pack " + strTableName;             using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))             {                connOdbc.Open();                OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);                cmdOdbc.ExecuteNonQuery();                connOdbc.Close();             }          }          catch (Exception exc)          {             blnRetVal = false;             strError = exc.Message;          }          return blnRetVal;       }

    Read the article

  • Packing a DBF

    - by Tom Hines
    I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized / remembered the DBF driver does not ACTUALLY delete records from the data source -- it only marks them for deletion.  You are responsible for handling the "chaff" either by using a utility to remove deleted records or by simply ignoring them.  If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content.  After several rounds of testing CRUD, the output DBF was huge. So, I went hunting for a method to "Pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FOXPRO driver at ( http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx ).  Once installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests.  Using MSQuery, I simply tested the raw SQL command Pack {tablename} and it WORKED! One really neat thing is the PACK command is used like regular SQL instructions; "Pack {tablename}" is all that is needed. It is necessary, however, to close all connections to the database (and re-open) before issuing the PACK command or you will get the "File is in use" error.    Here is some C# code for a Pack method.         /// <summary>       /// Pack the DBF removing all deleted records       /// </summary>       /// <param name="strTableName">The table to pack</param>       /// <param name="strError">output of any errors</param>       /// <returns>bool (true if no errors)</returns>       public static bool Pack(string strTableName, ref string strError)       {          bool blnRetVal = true;          try          {             OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()             {                Dsn = "PSAP_FOX_DBF"             };             string strSQL = "pack " + strTableName;             using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))             {                connOdbc.Open();                OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);                cmdOdbc.ExecuteNonQuery();                connOdbc.Close();             }          }          catch (Exception exc)          {             blnRetVal = false;             strError = exc.Message;          }          return blnRetVal;       }

    Read the article

  • User generated articles, how to do meta description?

    - by Tom Gullen
    If users submit a lot of good quality articles on the site, what is the best way to approach the meta description tag? I see two options: Have a description box and rely on them to fill it sensibly and in a good quality way Just exclude the meta description Method 1 is bad initially, but I'm willing to put time in going through and editing/checking all of them on a permanent basis. Method 2 is employed by the stack exchange site, and lets the search bots extract the best part of the page in the SERP. Thoughts? Ideas? I'm thinking a badly formed description tag is more damaging than not having one at all at the end of the day. I don't expect content to ever become unwieldy and too much to manage.

    Read the article

  • How do I install 32-bit perl-tgicl 2.1.1?

    - by Daniel Fernandez
    I'm trying to install a .deb and I need some packages, but they are not in Synaptic. How can I install these packages: lib32z1 and libc6-i386? TGICL$ sudo dpkg -i perl-tgicl_2.1-1_all.deb Selecting previously deselected package perl-tgicl. (Reading database ... 168515 files and directories currently installed.) Unpacking perl-tgicl (from perl-tgicl_2.1-1_all.deb) ... dpkg: dependency problems prevent configuration of perl-tgicl: perl-tgicl depends on lib32z1 (>= 1:1.1.4); however: Package lib32z1 is not installed. perl-tgicl depends on libc6-i386 (>= 2.3); however: Package libc6-i386 is not installed. perl-tgicl depends on libfile-homedir-perl (>= 0.10); however: Package libfile-homedir-perl is not installed. perl-tgicl depends on libfile-spec-perl (>= 0.10); however: Package libfile-spec-perl is not installed. dpkg: error processing perl-tgicl (--install): dependency problems - leaving unconfigured Processing triggers for man-db ... Errors were encountered while processing: perl-tgicl My OS: $ uname -a Linux 3.0.0-12-generic-pae #20-Ubuntu SMP Fri Oct 7 16:37:17 UTC 2011 i686 i686 i386 GNU/Linux

    Read the article
