Search Results

Search found 5318 results on 213 pages for 'managed c'.


  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines, they would prefer not to. What DVCS systems can host their central repository on Windows while providing:

    - Push access to the repository.
    - Basic authentication. Mostly just a way to allow or deny access to the whole repository; no need for fine-grained access.
    - A server process, so users don't need write rights to the repository, reducing the risk of accidentally messing with it.

    On the client side a GUI such as Tortoise would be more or less a requirement (sorry, the Windows shell sucks :|). Ease of installation would be a huge plus, as our IT department is already quite low on resources. Using Windows credentials for authentication would be an advantage but not a requirement, as long as the client is able to store the credentials.

    I have had a (really) quick look at Git, Mercurial and Bazaar. Git seemed to use ssh or simple WebDAV for repository access, requiring write permission for the users. Mercurial has a built-in HTTP server, but this seemed to be only for pull purposes. (Update: Mercurial supports push as well.) Bazaar seemed to use sftp for repository access, again requiring write permission for the users.

    Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in Windows land? Apologies if this is a duplicate question; I couldn't find one.

    Update: Got Mercurial working for push purposes! A detailed list of what was required can be found as an answer below.


  • rake task via cron problem loading rubygems

    - by Matenia Rossides
    I have managed to get a cron job to run a rake task by doing the following:

        cd /home/myusername/approotlocation/ && /usr/bin/rake sendnewsletter RAILS_ENV=development

    I have checked with which ruby and which rake to make sure the paths are correct (from bash). The job looks like it wants to run, as I get the following email from the cron daemon when it completes:

        Missing these required gems:
        chronic whenever searchlogic adzap-ar_mailer twitter gdata bitly ruby-recaptcha

        You're running:
        ruby 1.8.7.22 at /usr/bin/ruby
        rubygems 1.3.5 at /home/myusername/gems, /usr/lib/ruby/gems/1.8

        Run `rake gems:install` to install the missing gems.
        (in /home/myusername/approotlocation)

    My custom rake file within lib/tasks is as follows:

        task :sendnewsletter => :environment do
          require 'rubygems'
          require 'chronic'
          require 'whenever'
          require 'searchlogic'
          require 'adzap-ar_mailer'
          require 'twitter'
          require 'gdata'
          require 'bitly'
          require 'ruby-recaptcha'
          @recipients = Subscription.all(:conditions => {:active => true})
          for user in @recipients
            Email.send_later(:deliver_send_newsletter, user)
          end
        end

    With or without the require items, it still gives me the same error... Can anyone shed some light on this? Or alternatively advise me on how to make a custom file within the script directory that will run this function (I already have a cron job working that will run and process all my delayed_jobs). Cheers!


  • How to set UCS2 in numpy?

    - by mindcorrosive
    I'm trying to build numpy 1.2.1 as a module for a third-party Python interpreter (custom-built, py2.4, linux x86_64) so that I can make calls to numpy from within it. Let's call this one interpreter A. The thing is, the system-wide Python interpreter from the vendor (also py2.4, let's call it B) is built with --enable-unicode=ucs4, while the custom one is built with UCS2. Needless to say, when I build the module with B, I get an error when I try to import numpy in A -- it complains about an undefined symbol _PyUnicodeUCS4_IsWhiteSpace.

    I've searched around and apparently there's no way around this but to compile a custom Python interpreter -- which I did (let's call it interpreter C), properly specifying the unicode string length (verifiable through sys.maxunicode). I managed to build numpy with C as well, surprisingly enough, but the problem still persists when I try to import it in interpreter C. Previously, when I built numpy using B, there were no problems when importing it in B, but A would complain.

    Perhaps there's an option when building numpy to specify the length of Unicode strings to be used, as when configuring Python builds? Or am I doing something else wrong?

    A few notes:

    - Upgrading to newer versions of Python and/or numpy is not an option -- interpreter A will stay on this version of the grammar for the foreseeable future.
    - It is not possible to start interpreter A in standalone mode to build numpy with it, as it needs some other libraries preloaded.

    I know that this whole thing is a mess, but I'd appreciate any help I can get to make this work. If you need more information, please let me know, I'd be happy to oblige. Thanks to everybody for their time in advance.


  • Please help us non-C++ developers understand what RAII is

    - by Charlie Flowers
    Another question I thought for sure would have been asked before, but I don't see it in the "Related Questions" list. Could you C++ developers please give us a good description of what RAII is, why it is important, and whether or not it might have any relevance to other languages?

    I do know a little bit. I believe it stands for "Resource Acquisition Is Initialization". However, that name doesn't jibe with my (possibly incorrect) understanding of what RAII is: I get the impression that RAII is a way of initializing objects on the stack such that, when those variables go out of scope, the destructors will automatically be called, causing the resources to be cleaned up. So why isn't that called "using the stack to trigger cleanup" (UTSTTC:)? How do you get from there to "RAII"? And how can you make something on the stack that will cause the cleanup of something that lives on the heap?

    Also, are there cases where you can't use RAII? Do you ever find yourself wishing for garbage collection? At least a garbage collector you could use for some objects while letting others be managed? Thanks.
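
    For the "relevance to other languages" part of the question, the closest everyday analogue in a managed language is deterministic cleanup through IDisposable and using, where the scope block plays the role the stack plays in C++. A minimal C# sketch of that analogue (the LogFile wrapper is hypothetical, purely for illustration):

        using System;
        using System.IO;

        // Acquire the resource in the constructor, release it in Dispose; the
        // "using" block guarantees cleanup when the scope is left, even if an
        // exception is thrown -- the same guarantee a C++ destructor gives.
        class LogFile : IDisposable
        {
            private readonly StreamWriter writer;

            public LogFile(string path)
            {
                writer = new StreamWriter(path);   // acquisition is initialization
            }

            public void Write(string line)
            {
                writer.WriteLine(line);
            }

            public void Dispose()
            {
                writer.Dispose();                  // deterministic release, not left to the GC
            }
        }

        class Program
        {
            static void Main()
            {
                using (var log = new LogFile("app.log"))
                {
                    log.Write("doing work");
                }   // Dispose runs here, whether or not an exception occurred
            }
        }

    The difference from real RAII is that using is opt-in at every call site, whereas a C++ destructor fires for every instance that leaves scope -- which is exactly why the garbage-collection-versus-destructors question keeps coming up.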


  • how to read an arraylist from a txt file in java?

    - by lox
    How do I read an ArrayList from a txt file in Java? My list holds objects of the form:

        public class Account {
            String username;
            String password;
        }

    I managed to put some Accounts in a txt file, but now I don't know how to read them back. This is how my list looks in the txt file:

        username1 password1 | username2 password2 | etc

    This is a part of the code I came up with, but it doesn't work. It looks logical to me, though... :)

        public static void RdAc(String args[]) {
            ArrayList<Account> peoplelist = new ArrayList<Account>(50);
            int i, i2, i3;
            String[] theword = null;
            try {
                FileReader fr = new FileReader("myfile.txt");
                BufferedReader br = new BufferedReader(fr);
                String line = "";
                while ((line = br.readLine()) != null) {
                    String[] theline = line.split(" | ");
                    for (i = 0; i < theline.length; i++) {
                        theword = theline[i].split(" ");
                    }
                    for (i3 = 0; i3 < theline.length; i3++) {
                        Account people = new Account();
                        for (i2 = 0; i2 < theword.length; i2++) {
                            people.username = theword[i2];
                            people.password = theword[i2 + 1];
                            peoplelist.add(people);
                        }
                    }
                }
            } catch (IOException ex) {
                System.out.println("Could not read from file");
            }
        }


  • Sending basic authentication information via form

    - by VolatileStorm
    I am working on a site that currently uses a basic-authentication dialog box to log in -- the type of dialog that you get if you go here: http://www.dur.ac.uk/vm.boatclub/password/index.php

    I did not set this system up and am not in a position to easily/quickly work around it, but it DOES work. The issue, however, is that the dialog box is not very helpful in telling you what login information you have to use (that is, which username and password combination), and so I would like to replace it with a form. I had been thinking that this wasn't possible, but I wanted to ask in order to find out.

    Is it possible to set up an HTML form that sends the data to the server such that it accepts it in the same way that it would using this dialog box? Alternatively, is it possible to set up a PHP script that would take normal form data and process it somehow, passing it to the server such that it logs in?

    Edit: After being told that this is basic authentication, I went around and managed to find a way that works and keeps the user persistently logged in. However, this does not work in Internet Explorer. The solution was simply to redirect the user to:

        http://username:[email protected]/vm.boatclub/password/index.php

    But Internet Explorer removed support for this about 3 years ago because of its use in phishing. Is there a way to use JavaScript to get the browser to access the site in this way? Or will I have to simply change my UI?


  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and ApplicationPool. A website URL is also an acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API; we don't have that yet.

    Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path.

        $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername -authentication PacketPrivacy `
            -class "IISWebVirtualDirSetting" `
            | where-object {$_.Path -like $deployLocation}

    From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object.

        gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername `
            -authentication PacketPrivacy `
            -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'"

    The questions, restated:

    1) This is a query language. Can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How?

    2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query?

        "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
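
    WQL has no JOIN, so on IIS 6 the usual workaround is exactly this two-step lookup (or an ASSOCIATORS OF query where an association class exists). The Path query in question 2 is most likely rejected because WQL treats the backslash as an escape character inside string literals, so literal backslashes must be doubled. A C# sketch of the two-step version using System.Management -- the server name and path are placeholders, and this is an untested outline rather than production code:

        using System;
        using System.Management;

        class FindSiteByPath
        {
            static void Main()
            {
                string server = "myserver";                          // placeholder
                string physicalPath = @"D:\sites\globaldominator";   // placeholder

                var options = new ConnectionOptions { Authentication = AuthenticationLevel.PacketPrivacy };
                var scope = new ManagementScope(@"\\" + server + @"\root\MicrosoftIISv2", options);
                scope.Connect();

                // WQL treats '\' as an escape character, so double the backslashes.
                string escapedPath = physicalPath.Replace(@"\", @"\\");

                string dirQuery = "SELECT Name, Path FROM IIsWebVirtualDirSetting WHERE Path = '" + escapedPath + "'";
                foreach (ManagementObject dir in new ManagementObjectSearcher(scope, new ObjectQuery(dirQuery)).Get())
                {
                    // Name looks like W3SVC/40589473/root; the IIsWebServer Name is the W3SVC/<id> prefix.
                    string name = (string)dir["Name"];
                    int rootIndex = name.IndexOf("/root", StringComparison.OrdinalIgnoreCase);
                    string siteName = rootIndex > 0 ? name.Substring(0, rootIndex) : name;

                    string siteQuery = "SELECT * FROM IIsWebServer WHERE Name = '" + siteName + "'";
                    foreach (ManagementObject site in new ManagementObjectSearcher(scope, new ObjectQuery(siteQuery)).Get())
                    {
                        Console.WriteLine("Found site: " + site["Name"]);
                    }
                }
            }
        }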


  • Delete specific files after installation using visual studio setup project

    - by Vadiklk
    I have this problem. I want to build an installer for my C# solution that will be placed in a folder with other installation folders and files that need to be copied to the install folder. That part is easy: I just copy them to the folder I create, using the folder structure I want.

    Now, I also want to install another program and run a .exe file I've created to unzip some files for me. For that I need to copy 2 .exe files and 2 DLLs (for the exes) to the folder to which I am installing, and create 2 custom actions that will use them. That I've managed to do.

    After that I want to delete those 4 extra files, as the user does not need them and shouldn't even be aware they are there. How do I do that? I couldn't find a way in the built-in setup project preferences, and I do not know how to make a custom installer class.

    A bonus question: how do I make the other installer (one of the .exe files is just a plain installer) install quietly to any path? I do not want the user to see an installer pop out of my program installer. Thanks!
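
    A custom installer class is the usual way to do the cleanup: add a class derived from System.Configuration.Install.Installer to the project, add its output to the setup project as a custom action in the Commit phase (ordered after the actions that use the helper files), and set CustomActionData to something like /targetdir="[TARGETDIR]\". A sketch of that class, with hypothetical file names standing in for the real helpers:

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.IO;

        [RunInstaller(true)]
        public class CleanupInstaller : Installer
        {
            // Hypothetical names -- replace with the two .exe and two .dll helpers.
            private static readonly string[] HelperFiles =
                { "unzipper.exe", "other-setup.exe", "helper1.dll", "helper2.dll" };

            // Commit runs after the install phase, so the custom actions that
            // needed the helper files have already executed by this point.
            public override void Commit(IDictionary savedState)
            {
                base.Commit(savedState);

                string targetDir = Context.Parameters["targetdir"];
                foreach (string file in HelperFiles)
                {
                    string path = Path.Combine(targetDir, file);
                    if (File.Exists(path))
                    {
                        File.Delete(path);
                    }
                }
            }
        }

    As for the bonus question, whether the embedded installer can run quietly depends entirely on that installer: MSI-based packages can usually be launched as msiexec /i package.msi /qn TARGETDIR="...", while other setup programs have their own command-line switches (or none at all), so the custom action would simply pass those arguments when it starts the process.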


  • Need to call original function from detoured function

    - by peachykeen
    I'm using Detours to hook into an executable's message function, but I need to run my own code and then call the original code. From what I've seen in the Detours docs, it definitely sounds like that should happen automatically. The original function prints a message to the screen, but as soon as I attach a detour it starts running my code and stops printing.

    The original function is roughly:

        void CGuiObject::AppendMsgToBuffer(classA, unsigned long, unsigned long, int, classB);

    My function is:

        void CGuiObject_AppendMsgToBuffer( [same params, with names] );

    I know the memory position the original function resides in, so using:

        DWORD OrigPos = 0x0040592C;
        DetourAttach((void*)OrigPos, CGuiObject_AppendMsgToBuffer);

    gets me into the function. This code works almost perfectly: my function is called with the proper parameters. However, execution leaves my function and the original code is not called. I've tried jmp-ing back in, but that crashes the program (I'm assuming the code Detours moved to fit the hook is responsible for the crash).

    Edit: I've managed to fix the first issue, the failure to return to program execution. By calling the OrigPos value as a function, I'm able to go to the "trampoline" function and from there on to the original code. However, somewhere along the line the registers are changing, and that is causing the program to crash with a segfault as soon as I get back into the original code.


  • Cutting Ubuntu to the bone for Virtualbox VM

    - by Monty
    I've been looking around for a Linux variant which will install only the software I need rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in Virtualbox which has bash, apache, python, perl, SQLite, openssh and a few other programs but nothing else. I'd prefer to go with Ubuntu if possible but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc).

    So far, I've tried:

    - SuseStudio.com, which is probably the best so far.
    - Pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once).
    - Arch Linux: slightly confusing install procedure, but I might go back and try again.
    - Gentoo: started well, but fairly soon the HD on the virtual machine went to 2Gb, even before the installation had started in earnest (I'd partitioned the disks is all).

    I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc, but they seem to be aimed at desktop users or as a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer! -- Monty


  • Viewing namespaced global variables in Visual Studio debugger?

    - by Chris
    When debugging a non-managed C++ project in Visual Studio 2008, I occasionally want to see the value of a global variable. We don't have a lot of these, but those that are there are all declared within a namespace called 'global', e.g.

        namespace global {
            int foo;
            bool bar;
            ...
        }

    The problem is that when the code is stopped at a breakpoint, the default debugging tooltip (from pointing at the variable name) and quickwatch (shift-F9 on the variable name) don't take the namespace into consideration and hence won't work. So for example I can point at 'foo' and nothing comes up. If I shift-F9 on foo, it will bring up the quickwatch, which then says 'CXX0017: Error: symbol "foo" not found'.

    I can get around that by manually editing the variable name in the quickwatch window to prefix it with "global::" (which is cumbersome considering you have to do it each time you want to quickwatch), but there is no fix for the tooltip that I can work out. Setting the 'default namespace' in the project properties doesn't help.

    How can I tell the VS debugger to use the namespace that it already knows the variable is declared in (since it has the declaration right there), or, alternatively, tell it a default namespace to look for variables in if it doesn't find them? My google-fu has failed to find an answer. This report lists the same problem, with MS saying it's "by design", but even so I am hoping there is some way to work around it (perhaps with clever use of autoexp.dat?)


  • Using the Microsoft Ajax Minifier with Web Setup project & Source Control

    - by Rob
    I've just started investigating the Microsoft Ajax Minifier 4.0 for use with a Visual Studio 2008 web application I work on. It's proven easy enough to hook it into the .csproj file so it produces .min.js files for all scripts; however, I'm stumped as to how to integrate this with the Web Setup project and source control.

    Essentially what I want to do is have the resultant .min.js files included in the Web Setup project without having them included in source control, because:

    - Having to check them out prior to the build executing is a pain (the minifier cannot modify them if they're not checked out).
    - As they're created as a "build artifact", it just seems wrong to have them stored under source control.

    The only option I've come across so far is to explicitly include the .min.js files in the setup project by right-clicking on the Web Setup project and choosing "Add File", and then having the relevant folder hierarchy duplicated in "File System on Target Machine" so that I can force each file to the correct location. This is neither elegant nor simple/robust, as:

    - It requires me to manually add every minified js file to the Web Setup project by hand.
    - I have to maintain a copy of the relevant directory structure in both the Web Application project and the Web Setup project.
    - I have to remember to add the minified version of any new js file to the Web Setup project.

    Is there a better way of doing this?


  • NSUndoManager, Core Data and selective undo/redo

    - by Combat
    I'm working on a Core Data application that has a rather large hierarchy of managed objects, similar to a tree. When a base object is created, it creates a few child objects, which in turn create their own child objects, and so on. Each of these child objects may gather information using NSURLConnections.

    Now, I'd like to support undo/redo with the undoManager in the managedObjectContext. The problem is, if a user creates a base object, then tries to undo that action, the base object is not removed. Instead, one or more of the child objects may be removed. Obviously this type of action is unpredictable and unwanted.

    So I tried disabling undo registration by default. I did this by calling disableUndoRegistration: before anything is modified in the managedObjectContext, then enabling undo registration before base operations such as creating a base object, then re-disabling registration afterwards. Now when I try to undo, I get this error:

        undo: NSUndoManager 0x1026428b0 is in invalid state, undo was called with too many nested undo groups

    Thoughts?


  • Why aren't my objects sorting with sortedArrayUsingDescriptors?

    - by clozach
    I expected the code below to return the objects in imageSet as a sorted array. Instead, there's no difference in the ordering before and after.

        NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"imageID" ascending:YES];
        NSSet *imageSet = collection.images;
        for (CBImage *image in imageSet) {
            NSLog(@"imageID in Set: %@", image.imageID);
        }
        NSArray *imageArray = [[imageSet allObjects] sortedArrayUsingDescriptors:(descriptor, nil)];
        [descriptor release];
        for (CBImage *image in imageArray) {
            NSLog(@"imageID in Array: %@", image.imageID);
        }

    Fwiw, CBImage is defined in my core data model. I don't know why sorting on managed objects would work any differently than on "regular" objects, but maybe it matters.

    As proof that @"imageID" should work as the key for the descriptor, here's what the two log loops above output for one of the sets I'm iterating through:

        2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360339
        2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360337
        2010-05-05 00:49:52.877 Cover Browser[38678:207] imageID in Array: 360338
        2010-05-05 00:49:52.878 Cover Browser[38678:207] imageID in Array: 360336
        2010-05-05 00:49:52.879 Cover Browser[38678:207] imageID in Array: 360335
        ...

    For extra credit, I'd love to get a general solution to troubleshooting NSSortDescriptor troubles (esp. if it also applies to troubleshooting NSPredicate). The functionality of these things seems totally opaque to me, and consequently debugging takes forever.


  • Transfer data between C++ classes efficiently

    - by David
    Hi, need help... I have 3 classes: a Manager, which holds 2 pointers, one to class A and another to class B. A does not know about B and vice versa.

    A does some calculations and at the end it puts 3 floats into the clipboard. Next, B pulls the 3 floats from the clipboard and does its own calculations. This loop is managed by the Manager and repeats many times (iteration after iteration).

    My problem: now class A produces a vector of floats which class B needs. This vector can have more than 1000 values, and I don't want to use the clipboard to transfer it to B, as it will become a time consumer, even a bottleneck, since this behaviour repeats step by step.

    A simple solution is for B to know A (set a pointer to A). Another is to transfer a pointer to the vector via the Manager. But I'm looking for something different, more object-oriented, that won't break the existing separation between A and B.

    Any ideas? Many thanks, David


  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out if it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations.

    We are currently on SVN and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. (Note: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own.) We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems to me that I have to create 60 separate central repositories for each one of these projects on the server.

    QUESTION #1: Should I create a single repository for each project?

    If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the “C:...\MyRepository.hg\hgrc” file (Windows install). It also seems as if I have to run 60 servers (hg serve), I would assume on different ports.

    QUESTION #2: If the answer to question 1 is yes, and there should be a single central repository for each project, then how have people managed many multiple repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but would appreciate any comments from someone who has done this (or if it is even possible).


  • Create a new app pool and assign it to a site subfolder on a remote host, using C# and IIS7

    - by Soeren
    I have a web site running on IIS7 on a remote server. I would like to do the following:

    1. Create a new subfolder under the root virtual directory.
    2. Create a new app pool.
    3. Add this new app pool to the new subfolder.

    Normally, I would do this manually in IIS by first creating the app pool and then right-clicking the subfolder and choosing "Add application", but I need to do this programmatically in C#. I've managed to make points 1 and 2 above work, but I can't find a way to add the application to the subfolder. This is the code I have used so far for 1 and 2:

        ServerManager mgr = new ServerManager();
        ApplicationPool myAppPool = mgr.ApplicationPools.Add("MyAppPool");
        myAppPool.AutoStart = true;
        myAppPool.Cpu.Action = ProcessorAction.KillW3wp;
        myAppPool.ManagedPipelineMode = ManagedPipelineMode.Integrated;
        myAppPool.ManagedRuntimeVersion = "V4.0";
        myAppPool.ProcessModel.IdentityType = ProcessModelIdentityType.NetworkService;
        mgr.CommitChanges();

        if (!Directory.Exists(@"D:\webroot\TestSite\NytSite")) {
            Directory.CreateDirectory(@"D:\webroot\TestSite\NytSite");
        }

    So, I need to add "MyAppPool" to the "NytSite" folder... Is this even the correct way to do this? Any experiences out there? Thnx
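
    A sketch of step 3, assuming the folder is meant to become an IIS application under the existing site ("TestSite" is a guess at the site name based on the folder layout; treat this as an untested outline):

        using Microsoft.Web.Administration;

        class AddApplicationToFolder
        {
            static void Main()
            {
                // For a remote box, ServerManager.OpenRemote("servername") can be
                // used instead of the default constructor.
                using (ServerManager mgr = new ServerManager())
                {
                    Site site = mgr.Sites["TestSite"];

                    // Create the application at /NytSite, point it at the new folder,
                    // and bind it to the pool created earlier.
                    Application app = site.Applications.Add("/NytSite", @"D:\webroot\TestSite\NytSite");
                    app.ApplicationPoolName = "MyAppPool";

                    mgr.CommitChanges();
                }
            }
        }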


  • How to manipulate a header and then continue with it in C#?

    - by Simon Linder
    Hi all, I want to replace an old ISAPI filter that ran on IIS6. This filter checks whether the request is of a special kind, then manipulates the headers and continues with the request. Two headers are added in the manipulating method that I need for calling another special ISAPI module. So I have ISAPI C++ code like:

        DWORD OnPreProc(HTTP_FILTER_CONTEXT *pfc, HTTP_FILTER_PREPROC_HEADERS *pHeaders)
        {
            if (ManipulateHeaderInSomeWay(pfc, pHeaders))
            {
                return SF_STATUS_REQ_NEXT_NOTIFICATION;
            }
            return SF_STATUS_REQ_FINISHED;
        }

    I now want to rewrite this ISAPI filter as a managed module for IIS7. So I have something like this:

        private void OnMapRequestHandler(HttpContext context)
        {
            ManipulateHeaderInSomeWay(context);
        }

    And now what? The request does not seem to do what it should. I have already written an IIS7 native module that implements the same method, and that method has a return value with which I can tell what to do next:

        REQUEST_NOTIFICATION_STATUS CMyModule::OnMapRequestHandler(IN IHttpContext *pHttpContext,
                                                                   OUT IMapHandlerProvider *pProvider)
        {
            if (DoSomething(pHttpContext))
            {
                return RQ_NOTIFICATION_CONTINUE;
            }
            return RQ_NOTIFICATION_FINISH_REQUEST;
        }

    So is there a way to send my manipulated context on again?
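
    A sketch of how the managed version is usually wired up, with the caveats that the header names here are placeholders and that writable request headers require the IIS7 integrated pipeline (the module being registered under <system.webServer><modules> in web.config). In the managed pipeline there is no return status: letting the event handler return plays the role of RQ_NOTIFICATION_CONTINUE, and HttpApplication.CompleteRequest() is the closest match to RQ_NOTIFICATION_FINISH_REQUEST.

        using System;
        using System.Web;

        public class HeaderRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // BeginRequest fires before handler mapping, roughly where the old
                // pre-proc-headers notification sat in the ISAPI filter.
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpApplication app = (HttpApplication)sender;

                // Mirrors the if/else structure of the original filter.
                if (TryManipulateHeaders(app.Context))
                {
                    // Just returning lets the pipeline carry on.
                    return;
                }

                // Skip the rest of the pipeline and jump to EndRequest.
                app.CompleteRequest();
            }

            private static bool TryManipulateHeaders(HttpContext context)
            {
                // Placeholder for the original "is this the special kind of request" check.
                if (!context.Request.Path.EndsWith(".special", StringComparison.OrdinalIgnoreCase))
                    return false;

                // Modifying the request header collection only works in integrated mode.
                context.Request.Headers["X-Custom-One"] = "value1";
                context.Request.Headers["X-Custom-Two"] = "value2";
                return true;
            }

            public void Dispose() { }
        }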


  • How can I detect if this dictionary key exists in C#?

    - by Adam Tuttle
    I am working with the Exchange Web Services Managed API, with contact data. I have the following code, which is functional but not ideal:

        foreach (Contact c in contactList)
        {
            string openItemUrl = "https://" + service.Url.Host + "/owa/" + c.WebClientReadFormQueryString;
            row = table.NewRow();
            row["FileAs"] = c.FileAs;
            row["GivenName"] = c.GivenName;
            row["Surname"] = c.Surname;
            row["CompanyName"] = c.CompanyName;
            row["Link"] = openItemUrl;

            //home address
            try { row["HomeStreet"] = c.PhysicalAddresses[PhysicalAddressKey.Home].Street.ToString(); }
            catch (Exception e) { }
            try { row["HomeCity"] = c.PhysicalAddresses[PhysicalAddressKey.Home].City.ToString(); }
            catch (Exception e) { }
            try { row["HomeState"] = c.PhysicalAddresses[PhysicalAddressKey.Home].State.ToString(); }
            catch (Exception e) { }
            try { row["HomeZip"] = c.PhysicalAddresses[PhysicalAddressKey.Home].PostalCode.ToString(); }
            catch (Exception e) { }
            try { row["HomeCountry"] = c.PhysicalAddresses[PhysicalAddressKey.Home].CountryOrRegion.ToString(); }
            catch (Exception e) { }

            //and so on for all kinds of other contact-related fields...
        }

    As I said, this code works. Now I want to make it suck a little less, if possible. I can't find any methods that allow me to check for the existence of the key in the dictionary before attempting to access it, and if I try to read it (with .ToString()) and it doesn't exist, then an exception is thrown:

        500 The given key was not present in the dictionary.

    How can I refactor this code to suck less (while still being functional)?
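
    The usual shape of the fix is to ask the dictionary before reading from it (ContainsKey, or TryGetValue to avoid the double lookup) instead of catching the exception. The sketch below shows that pattern against a plain IDictionary with a small helper so the row-filling stays flat; the EWS PhysicalAddresses collection would need whichever lookup member it actually exposes (a Contains/TryGetValue-style method -- worth checking the Managed API docs for the version in use):

        using System;
        using System.Collections.Generic;

        static class ContactRows
        {
            // Returns the value's string form, or "" when the key is absent or null.
            static string ValueOrEmpty<TKey, TValue>(IDictionary<TKey, TValue> dict, TKey key)
            {
                TValue value;
                return dict.TryGetValue(key, out value) && value != null
                    ? value.ToString()
                    : string.Empty;
            }

            static void Main()
            {
                // Illustrative stand-in for c.PhysicalAddresses[PhysicalAddressKey.Home].
                var home = new Dictionary<string, string>
                {
                    { "Street", "1 Main St" },
                    { "City", "Springfield" }
                };

                Console.WriteLine(ValueOrEmpty(home, "Street"));  // "1 Main St"
                Console.WriteLine(ValueOrEmpty(home, "Zip"));     // "" -- no exception thrown
            }
        }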


  • Removing elements using XSLT 1.0

    - by pmdarrow
    I'm attempting to remove Component elements from the XML below that have File children with the extension "config". I've managed to do this part, but I also need to remove the matching ComponentRef elements that have the same IDs as these Components.

        <Fragment>
            <DirectoryRef Id="MyWebsite">
                <Component Id="Comp1">
                    <File Source="Web.config" />
                </Component>
                <Component Id="Comp2">
                    <File Source="Default.aspx" />
                </Component>
            </DirectoryRef>
        </Fragment>
        <Fragment>
            <ComponentGroup Id="MyWebsite">
                <ComponentRef Id="Comp1" />
                <ComponentRef Id="Comp2" />
            </ComponentGroup>
        </Fragment>

    Based on other SO answers, I've come up with the following XSLT to remove these Component elements:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:output method="xml" indent="yes" />

            <xsl:template match="Component[File[substring(@Source, string-length(@Source) - string-length('config') + 1) = 'config']]" />

            <xsl:template match="@*|node()">
                <xsl:copy>
                    <xsl:apply-templates select="@*|node()"/>
                </xsl:copy>
            </xsl:template>
        </xsl:stylesheet>

    Unfortunately, this doesn't remove the matching ComponentRef elements. The XSLT will remove the Component with the Id "Comp1" but not the ComponentRef with Id "Comp1". How do I achieve this using XSLT 1.0?


  • iPhone Core Data does not refresh table

    - by Brian515
    Hi all, I'm trying to write an application with Core Data, and I have been able to successfully read and write to the Core Data database. However, if I write to the database in one view controller, my other view controllers will not see the change until the app is closed and then reopened. This is really frustrating.

    I'm not entirely sure how to get the refresh method - (void)refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag to work. How do I get a reference to my managed object?

    Anyway, here's the code I'm using to read the data back. This is in the viewDidLoad method.

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Website" inManagedObjectContext:managedObjectContext];
        [request setEntity:entity];

        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"siteName" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [sortDescriptor release];
        [sortDescriptors release];

        NSError *error = nil;
        NSMutableArray *mutableFetchResults = [[managedObjectContext executeFetchRequest:request error:&error] mutableCopy];
        if (mutableFetchResults == nil) {
            //Handle the error
        }

        [self setNewsTitlesArray:mutableFetchResults];
        [mutableFetchResults release];
        [request release];

        [newsSourcesTableView reloadData];

    Thanks for help in advance!


  • Perl - Read XML

    - by chinna_82
    XML:

        <?xml version='1.0'?>
        <employee>
            <name>Smith</name>
            <age>43</age>
            <sex>M</sex>
            <department role='manager'>Operations</department>
        </employee>

    Perl:

        use XML::Simple;
        use Data::Dumper;

        $xml = new XML::Simple;
        foreach my $data1 ($data = $xml->XMLin("test.xml")) {
            print Dumper($data1);
        }

    The above code managed to dump all the XML values like this:

        $VAR1 = {
            'department' => {
                'content' => 'Operations',
                'role' => 'manager'
            },
            'name' => 'John Doe',
            'sex' => 'M',
            'age' => '43'
        };

    What do I do if I only want to get the role value? For this example I need to get role = manager. Any advice or reference link is highly appreciated.


  • How can I create a finger scrollable Textbox in WM 6.5?

    - by Papajohn
    Hi everybody. I just noticed something weird in the WM 6.5 emulators. Unlike 6.1, where finger panning kind of worked, the only way to scroll a Textbox appears to be through scrollbars. This behaviour is in contrast to what they have done for comboboxes: they are now gesture-friendly without the programmer's intervention, i.e. the user can select a choice from a standard drop-down menu by panning and scrolling. Previously, you had to use the embedded scrollbar.

    The combobox's case implies that MS took some measures to provide standard gesture support for classic finger gestures, yet I cannot see something similar for textboxes. This makes me ask the following: is there anything that can be done to make textboxes finger scrollable easily?

    Note that I refer to managed .NET CF development. It is my understanding that in native development I could use the new Gestures API to achieve the scrolling effect, yet I am not sure if there is an easier and more straightforward method that I have missed.


  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or as 4chan says, "for teh lolz"), and if I learn something on the way, all the better. I took an AI course almost 2 years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that.

    Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum inputs (or maybe Twitter tweets) and then generate a comment based on the learning. Now the simplest way would be to use a Markov chain text generator, but I want something a little bit more complex than that, as the MKC basically only learns by word order (which word is more likely to appear after word x given the input text). I'm trying to see if there's something I can do to make it a little bit smarter. For example, I want it to do something like this:

    - Learn from a large selection of posts in a message board, but don't weight it too much.
    - For each post: learn from the other comments in that post and weigh these inputs higher.
    - Generate a comment and post it.
    - See what other users' reaction to your post was. If good, weigh it positively so you make more posts that are similar to the one made, and vice versa if negative.

    It's the weighting and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they're mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input), not really to generate anything. I'm not even sure if this is possible, or if it is, what I should go about learning/figuring out. What algorithm is best suited for this?

    To those worried that I will use this as a bot to spam or provide bad answers to SO: I promise that I will not use this to provide (bad) advice or to spam for profit. I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
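
    One way to get the weighting and the learning-from-mistakes without jumping straight to neural networks is to keep the Markov-style transition table but store floating-point weights rather than raw counts: different training sources can then contribute with different strengths, and reader reactions can scale the transitions that produced a post up or down. A rough C# sketch of that idea (illustrative only -- the class and method names are made up):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WeightedMarkov
        {
            private readonly Dictionary<string, Dictionary<string, double>> transitions =
                new Dictionary<string, Dictionary<string, double>>();
            private readonly Random rng = new Random();

            // Feed training text in; sourceWeight lets some sources (e.g. comments
            // in the same thread) count for more than the general corpus.
            public void Learn(string text, double sourceWeight)
            {
                string[] words = text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                for (int i = 0; i < words.Length - 1; i++)
                {
                    Dictionary<string, double> next;
                    if (!transitions.TryGetValue(words[i], out next))
                    {
                        next = new Dictionary<string, double>();
                        transitions[words[i]] = next;
                    }
                    double current;
                    next.TryGetValue(words[i + 1], out current);
                    next[words[i + 1]] = current + sourceWeight;
                }
            }

            // Random walk over the weighted transitions (roulette-wheel selection).
            public List<string> Generate(string start, int maxWords)
            {
                var result = new List<string> { start };
                string current = start;
                while (result.Count < maxWords)
                {
                    Dictionary<string, double> next;
                    if (!transitions.TryGetValue(current, out next) || next.Count == 0)
                        break;

                    double roll = rng.NextDouble() * next.Values.Sum();
                    string chosen = null;
                    foreach (var pair in next)
                    {
                        roll -= pair.Value;
                        if (roll <= 0) { chosen = pair.Key; break; }
                    }
                    current = chosen ?? next.Keys.First();
                    result.Add(current);
                }
                return result;
            }

            // Feedback step: scale the transitions used in a generated post up
            // (factor > 1 for a well-received post) or down (factor < 1).
            public void Reinforce(IList<string> generatedWords, double factor)
            {
                for (int i = 0; i < generatedWords.Count - 1; i++)
                {
                    Dictionary<string, double> next;
                    if (transitions.TryGetValue(generatedWords[i], out next) &&
                        next.ContainsKey(generatedWords[i + 1]))
                    {
                        next[generatedWords[i + 1]] = Math.Max(0.01, next[generatedWords[i + 1]] * factor);
                    }
                }
            }
        }

    Learning from a thread's own comments then just means calling Learn with a higher sourceWeight for those, and calling Reinforce on a generated post's word list with a factor above or below 1.0 depending on how it was received.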


  • How to make images hosted on Amazon S3 less public but not completely private?

    - by Jay Godse
    I fired up a sample application that uses Amazon S3 for image hosting, and managed to coax it into working. The application is hosted at github.com. It lets you create users with a profile photo. When you upload the photo, the web application stores it on Amazon S3 instead of your local file system (very important if you host at heroku.com).

    However, when I did a "view source" in the browser of the page, I noticed that the URL of the picture was an Amazon S3 URL in the S3 bucket that I assigned to the app. I cut & pasted the URL and was able to view the picture in the same browser, and in another browser in which I had no open sessions to my web app or to Amazon S3.

    Is there any way that I could restrict access to that URL (and image) so that it is accessible only to browsers that are logged into my application? Most of the information I found about Amazon ACLs only talks about access for the owner, for groups of users authenticated with Amazon or Amazon S3, or for everybody anonymously.
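
    The usual middle ground between fully public and fully private is to keep the objects private in S3 and have the web application hand logged-in users a short-lived pre-signed URL; the link in the page source then expires after a few minutes and is useless to anyone who copies it elsewhere. A sketch of the idea with the AWS SDK for .NET, purely to illustrate the call shape -- bucket, key, credentials and expiry are placeholders, and whatever S3 library the sample application itself uses should expose the same operation:

        using System;
        using Amazon.S3;
        using Amazon.S3.Model;

        class PresignedUrlExample
        {
            static void Main()
            {
                // Placeholder credentials; real code would load these from config.
                using (var s3 = new AmazonS3Client("ACCESS_KEY", "SECRET_KEY"))
                {
                    var request = new GetPreSignedUrlRequest
                    {
                        BucketName = "my-app-bucket",        // placeholder
                        Key = "avatars/user-42.jpg",         // placeholder
                        Expires = DateTime.UtcNow.AddMinutes(10)
                    };

                    // Embed this URL in pages served to authenticated users only.
                    string url = s3.GetPreSignedURL(request);
                    Console.WriteLine(url);
                }
            }
        }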

