Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 398/465 | < Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and CreateFileView() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work.

    If anyone out there knows the OS implementation details behind memory mapped files, I would appreciate comments on the following theory: if two processes share a memory mapped file and one process writes to it while another reads it, then the OS marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the OS knows to reload the invalidated page. The number of soft page faults is therefore directly proportional to the size of the data write.

    My experiments seem to bear out this theory. When I share data I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory mapped file instead of a TCP socket connection because I thought it would be more efficient.

    If the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not intrinsically harmful, then I'd also like to hear any guidelines as to what number per second counts as "excessive". Thanks.

    Read the article

  • iPhone contacts app styled indexed table view implementation

    - by KSH
    My requirement: I have the straightforward requirement of listing names of people in alphabetical order in an indexed table view, with the index titles being the starting letters of the alphabet (additionally a search icon at the top and "#" to display misc values which start with a number or other special characters).

    What I have done so far:
    1. I am using Core Data for storage, and "last_name" is modelled as a String property in the Contacts entity.
    2. I am using an NSFetchedResultsController to display the sorted, indexed table view.

    Issues accomplishing my requirement: I couldn't get the section index titles to be the first letters of the alphabet. Dave's suggestion in the following post helped me achieve that: http://stackoverflow.com/questions/1112521/nsfetchedresultscontroller-with-sections-created-by-first-letter-of-a-string The only issue I encountered with Dave's suggestion is that I couldn't get the misc names grouped under the "#" index.

    What I have tried: I tried adding a custom compare method to NSString (category) to check how the comparison and sectioning are made, but that custom method doesn't get called when specified as the NSSortDescriptor selector. Here is some code:

        @interface NSString (SortString)
        -(NSComparisonResult) customCompare: (NSString*) aString;
        @end

        @implementation NSString (SortString)
        -(NSComparisonResult) customCompare:(NSString *)aString {
            NSLog(@"Custom compare called to compare : %@ and %@", self, aString);
            return [self caseInsensitiveCompare:aString];
        }
        @end

    Code to fetch data:

        NSArray *sortDescriptors = [NSArray arrayWithObject:[[[NSSortDescriptor alloc] initWithKey:@"last_name" ascending:YES selector:@selector(customCompare:)] autorelease]];
        [fetchRequest setSortDescriptors:sortDescriptors];
        fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"lastNameInitial" cacheName:@"MyCache"];

    Can you let me know what I am missing and how the requirement can be accomplished?

    Read the article

  • Publishing my Website to my Local Disk Causes Exceptions to show Paths including my Local Disk

    - by coffeeaddict
    I've published my website many times, but didn't think about this until I came across this issue. I decided to publish my WAP project to a local folder on my C drive first, then used FTP to upload it to my shared host on discountasp.net. I noticed at runtime that the stack trace was still referencing that local folder and erroring out. Does anyone know what config settings are affected when publishing? Obviously something is still pointing to my local C drive, and I've searched my entire solution and don't see why. Here's the runtime error I get when my code runs on discountasp.net's web server:

        Cannot write into the public directory - check permissions
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: ScrewTurn.Wiki.PluginFramework.InvalidConfigurationException: Cannot write into the public directory - check permissions
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace:
        [InvalidConfigurationException: Cannot write into the public directory - check permissions]
          ScrewTurn.Wiki.SettingsStorageProvider.Init(IHostV30 host, String config) in C:\www\Wiki\Screwturn3_0_2_509\Core\SettingsStorageProvider.cs:90
          ScrewTurn.Wiki.StartupTools.Startup() in C:\www\Wiki\Screwturn3_0_2_509\Core\StartupTools.cs:69
          ScrewTurn.Wiki.Global.Application_BeginRequest(Object sender, EventArgs e) in C:\www\Wiki\Screwturn3_0_2_509\WebApplication\Global.asax.cs:29
          System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68
          System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75

    Discountasp says it's not a permission issue, but obviously it is. I think /Wiki should work... but it's not. Here's my site viewed in FTP on discountasp.net's server:

    Read the article

  • Intern working for Indian NGO - Help with PHP 4, advising staff

    - by Kevin Burke
    Hello, for the past three months I've been working for an Indian NGO (http://sevamandir.org), doing some volunteer work in the field but also trying to improve their website, which needs a ton of work. Recently I've been trying to fix the "subscribe to newsletter" button, which is broken. I used filter_var to filter the email input, but when I tried to test this out I got an error. Then I learned that the web host is still using PHP version 4.3.2 and register_globals is turned on. I've mentioned before that they should upgrade their web host (they are paying around $50 per year for Rediff Web Hosting, complete with 100MB storage and 1 MySQL database). That would add a lot of complexity for the IT staff of 3, who would have to update everyone's email information (I assume? this is a 250-person organization), and have me find a new web host and teach them about it. The staff isn't that sophisticated about web usage - the head guy still uses IE6, and the website's laid out in tables (they use Dreamweaver WYSIWYG to lay out pages).

    So I've got three options: use regular expressions to filter the email, which I'm not that skilled at doing (and which would be more vulnerable to exploitation after I leave); turn off register_globals and then try to teach the staff what I'm doing; or try to get them to upgrade their versions of PHP and MySQL and/or change web host. I'd appreciate some advice. Thanks for your help, Kevin

    Read the article

  • Add Hexadecimal Header Info to JPEG File Using Java

    - by jboyd
    I need to add header info to a JPEG file in order to get it to work properly when shared on some websites. I've tracked down the correct info through a lot of hex digging, but now I'm kind of stuck trying to get it into the file. I know where in the file it needs to go, and I know how long it is. My problem is that RandomAccessFile just overwrites existing data in the file and FileOutputStream appends the data to the end. I don't want either; I want to INSERT data starting at the third byte. My example code:

        File fileToChange = new File("someimage.jpg");
        byte[] i = new byte[2];
        i[0] = (byte)Integer.decode("0xcc");
        i[1] = (byte)Integer.decode("0xcc");
        RandomAccessFile f = new RandomAccessFile(new File("videothing.jpg"), "rw");
        long aPositionWhereIWantToGo = 2;
        f.seek(aPositionWhereIWantToGo); // this basically reads n bytes in the file
        f.write((byte[])i);
        f.close();

    So this doesn't work because it overwrites and does not insert; I can't find any way to just insert data into a file.
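
    Since neither RandomAccessFile nor FileOutputStream can insert bytes in place, a common workaround is to rewrite the file: copy the bytes before the insertion point, write the new header bytes, then copy the remainder. Below is a minimal Java sketch of that approach; the file names and the two 0xCC placeholder bytes are taken from the question, and the output file name is just an assumption for illustration.

        import java.io.*;

        public class JpegHeaderInsert {
            // Inserts 'extra' into 'source' at byte offset 'position' by writing a new file.
            static void insertBytes(File source, long position, byte[] extra, File target) throws IOException {
                try (FileInputStream in = new FileInputStream(source);
                     FileOutputStream out = new FileOutputStream(target)) {
                    byte[] buffer = new byte[8192];
                    long copied = 0;
                    // Copy everything before the insertion point (here: the first 2 bytes).
                    while (copied < position) {
                        int n = in.read(buffer, 0, (int) Math.min(buffer.length, position - copied));
                        if (n < 0) throw new EOFException("File shorter than insert position");
                        out.write(buffer, 0, n);
                        copied += n;
                    }
                    // Write the new header bytes, then the rest of the original file.
                    out.write(extra);
                    int n;
                    while ((n = in.read(buffer)) > 0) {
                        out.write(buffer, 0, n);
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                byte[] header = { (byte) 0xcc, (byte) 0xcc };
                insertBytes(new File("someimage.jpg"), 2, header, new File("someimage-patched.jpg"));
            }
        }

    The patched copy can then replace the original with File.renameTo or a file-copy step, which keeps the rewrite atomic from the application's point of view.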

    Read the article

  • SQL Server 2005 Reporting Services and the Report Viewer

    - by Kendra
    I am having an issue embedding my report into an aspx page. Here's my setup:
    - 1 server running SQL Server 2005 and SQL Server 2005 Reporting Services
    - 1 workstation running XP and VS 2005

    The server is not on a domain. Reporting Services is a default installation. I have one report called TestMe in a folder called TestReports, using a shared datasource.

    If I view the report in Report Manager, it renders fine. If I view the report using the http://myserver/reportserver URL it renders fine. If I view the report using http://myserver/reportserver?/TestReports/TestMe it renders fine. If I try to view the report using http://myserver/reportserver/TestReports/TestMe, it just goes to the folder navigation page of the home directory.

    My web application is impersonating somebody specific to get around the server not being on a domain. When I call the report from the report viewer using http://myserver/reportserver as the server and /TestReports/TestMe as the path, I get this error:

        For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method.

    When I change the server to http://myserver/reportserver? I get this error when I run the report:

        Client found response content type of '', but expected 'text/xml'. The request failed with an empty response.

    I have been searching for a while and haven't found anything that fixes my issue. Please let me know if there is more information needed. Thanks in advance, Kendra

    Read the article

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use XML::LibXML::Reader;

        my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' ) or die $!;
        while ( $reader->read ) {
            say $reader->name;
        }

    At the end of the output from this script I get these error messages:

        *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 ***
        ======= Backtrace: =========
        /lib64/libc.so.6[0x7fb84952fc76]
        ...
        ======= Memory map: ========
        00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl
        ...

    Is this due to a bug? perl -V:

        Summary of my perl5 (revision 5 version 12 subversion 0) configuration:
          Platform:
            osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux
            uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux '
            config_args='-Dnoextensions=ODBM_File'
            hint=recommended, useposix=true, d_sigaction=define
            useithreads=undef, usemultiplicity=undef
            useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
            use64bitint=define, use64bitall=define, uselongdouble=undef
            usemymalloc=n, bincompat5005=undef
          Compiler:
            cc='cc', ccflags ='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
            optimize='-O2',
            cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
            ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers=''
            intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
            d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
            ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
            alignbytes=8, prototype=define
          Linker and Libraries:
            ld='cc', ldflags =' -fstack-protector -L/usr/local/lib'
            libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64
            libs=-lnsl -ldl -lm -lcrypt -lutil -lc
            perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc
            libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a
            gnulibc_version='2.10.1'
          Dynamic Linking:
            dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
            cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector'
        Characteristics of this binary (from libperl):
          Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF
          Built under linux
          Compiled at Apr 15 2010 13:25:46
          @INC:
            /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux
            /usr/local/lib/perl5/site_perl/5.12.0
            /usr/local/lib/perl5/5.12.0/x86_64-linux
            /usr/local/lib/perl5/5.12.0
            .

    Read the article

  • Proof of library bug vs developer side application bug

    - by Paralife
    I have a problem with a specific Java client library. I won't say here what the problem is or the name of the library, because my question is a different one. Here is the situation: I have made a program that uses the library. The program is a class named 'WorkerThread' that extends Thread. To start it I have made a Main class that only contains a main() function that starts the thread and nothing else. The worker uses the library to perform comms with a server and get results. The problem appears when I want to run 2 WorkerThreads simultaneously. What I first did was this in the Main class:

        public class Main {
            public static void main(String args[]) {
                new WorkerThread().start(); // 1st thread.
                new WorkerThread().start(); // 2nd thread.
            }
        }

    When I run this, both threads produce irrational results and, what is more, some results that should be received by the 1st thread are received by the 2nd instead. If instead of the above I just run 2 separate processes of one thread each, then everything works fine. Also:

    1. There is no static class or method used inside WorkerThread that could cause the problem. My application consists of only the worker thread class and contains no static fields or methods.
    2. The library is supposed to be usable in a multithreaded environment. In my thread I just create a new instance of one of the library's classes and then call methods on it. Nothing more.

    My question is this: without knowing any details of my implementation, are the above situation and facts enough to prove that there is a bug in the library and not in my program? Is it safe to assume that the library internally uses a static method or object that is indirectly shared by my 2 threads and that this causes the problem? If not, then in what hypothetical situation could the bug originate in the worker class code?
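
    For illustration only (the real library is unnamed here), the hypothetical sketch below shows how hidden static state inside a library class can produce exactly this symptom: each thread creates its own client instance, yet replies get crossed between threads because the instances share a process-wide field. All class and method names are invented for the example.

        // Hypothetical sketch, not the real library: the client object looks per-instance,
        // but the response slot it uses is static, i.e. shared across the whole process.
        class FlakyLibraryClient {
            private static String lastResponse;            // hidden shared state

            public String call(String request) throws InterruptedException {
                synchronized (FlakyLibraryClient.class) {
                    lastResponse = request.toUpperCase();  // stand-in for "server reply arrives"
                }
                Thread.sleep(10);                          // another thread may overwrite it here
                synchronized (FlakyLibraryClient.class) {
                    return lastResponse;                   // may now hold the OTHER thread's reply
                }
            }
        }

        public class CrossTalkDemo {
            public static void main(String[] args) {
                Runnable worker = () -> {
                    FlakyLibraryClient client = new FlakyLibraryClient(); // fresh instance per thread
                    for (int i = 0; i < 5; i++) {
                        try {
                            String reply = client.call(Thread.currentThread().getName() + "-req" + i);
                            System.out.println(Thread.currentThread().getName() + " got: " + reply);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                };
                new Thread(worker, "worker-1").start();    // two workers in one process:
                new Thread(worker, "worker-2").start();    // replies end up crossed between them
            }
        }

    Running the two workers as separate processes gives each its own copy of the static field, which matches the observation that separate processes behave correctly while two threads in one JVM do not.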

    Read the article

  • MVC View Model Intellisense / Compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that 'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?), implying that the view is not seeing the model. In the controller I have full access to the model and have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

        return View(new TeraViral_Blog());

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Index2
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Index2</h2>
            <fieldset>
                <legend>Fields</legend>
                <p>
                    ID: <%= Html.Encode(Model.ID) %>
                </p>
            </fieldset>
        </asp:Content>

    Read the article

  • .htaccess equivalent of baseurl?

    - by Ryan
    Hello, I'm trying to install Symfony on a shared server and am attempting to duplicate the httpd.conf configuration:

        # Be sure to only have this line once in your configuration
        NameVirtualHost 127.0.0.1:8080

        # This is the configuration for your project
        Listen 127.0.0.1:8080

        <VirtualHost 127.0.0.1:8080>
          DocumentRoot "/home/sfproject/web"
          DirectoryIndex index.php
          <Directory "/home/sfproject/web">
            AllowOverride All
            Allow from All
          </Directory>

          Alias /sf /home/sfproject/lib/vendor/symfony/data/web/sf
          <Directory "/home/sfproject/lib/vendor/symfony/data/web/sf">
            AllowOverride All
            Allow from All
          </Directory>
        </VirtualHost>

    I need to do so using .htaccess. The redirect portion was done in the root using the following:

        Options +FollowSymlinks
        RewriteEngine on
        rewritecond %{http_host} ^symfony.mysite.ca [nc]
        rewriterule ^(.*)$ http://symfony.mysite.ca/web/$1 [r=301,nc]

    For the alias, I've tried using:

        RewriteBase /

    but no success. The main issue is that the index.php file residing in the /web folder uses the /web path in its paths to images and scripts. So instead of

        href="/css/main.css"       // this would work

    it uses

        href="/web/css/main.css"   // this doesn't work, already in the /web/ directory!

    Any ideas? Thanks!

    Read the article

  • When I add a database table to a DBML file via LINQ to SQL, I get a slew of compiler errors.

    - by Zian Choy
    Whenever I add a certain table to a DBML file via LINQ to SQL, I get 102 errors in my VB.NET project. Some of the errors:

        Error 1: Attribute 'TableAttribute' cannot be applied multiple times. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 74 2 EMS Reality Check
        Error 2: 'emptyChangingEventArgs' is already declared as 'Private Shared emptyChangingEventArgs As System.ComponentModel.PropertyChangingEventArgs' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 78 17 EMS Reality Check
        Error 3: '_GroupID' is already declared as 'Private _GroupID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 80 10 EMS Reality Check
        Error 4: '_ID' is already declared as 'Private _ID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 82 10 EMS Reality Check

    Any suggestions for getting the table to work with LINQ to SQL will be welcomed. The table's properties: Group ID, ID (Primary Key), Contact, Title, UseGroupAddress, InternationalFormat, Address1, Address2, City, State, ZipCode, Country, Phone, Fax, EMailAddress, Notes, DateAdded, AddedBy, DateChanged, ChangedBy, Active, ExternalReference, ChangeCounter, PhoneLabel, FaxLabel.

    Read the article

  • Common JNDI resources in Tomcat

    - by Lehane
    Hi, I'm running a couple of servlet applications in Tomcat (5.5). All of the servlets use a common factory resource that is shared out using JNDI. At the moment, I can get everything working by including the factory resource as a GlobalNamingResource in the /conf/server.xml file, and then having each servlet's META-INF/context.xml file include a ResourceLink to the resource. Snippets from the XML files are included below. NOTE: I'm not that familiar with Tomcat, so I'm not saying that this is a good configuration!!!

    However, I now want to be able to install these servlets into multiple Tomcat instances automatically using an RPM. The RPM will firstly copy the WARs to the webapps directory, and the jars for the factory into the common/lib directory (which is fine). But it will also need to make sure that the factory resource is included as a resource for all of the servlets. What is the best way to add the resource globally? I'm not too keen on writing a script that goes into the server.xml file and adds in the resource that way. Is there any way for me to add in multiple server.xml files, so that I can write a new server-app.xml file and have it concatenate my settings to server.xml? Or, better still, to add this JNDI resource to all the servlets without using server.xml at all?

    p.s. Restarting the server will not be an issue, so I don't mind if the changes don't get picked up automatically. Thanks

    Snippet from server.xml:

        <!-- Global JNDI resources -->
        <GlobalNamingResources>
          <Resource name="bean/MyFactory"
                    auth="Container"
                    type="com.somewhere.Connection"
                    factory="com.somewhere.MyFactory"/>
        </GlobalNamingResources>

    The entire servlet's META-INF/context.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context>
          <ResourceLink global="bean/MyFactory"
                        name="bean/MyFactory"
                        type="com.somewhere.MyFactory"/>
        </Context>
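
    For reference, once the ResourceLink above is in place, a servlet would typically obtain the resource through the standard per-webapp JNDI namespace. A minimal sketch follows; the lookup name "bean/MyFactory" comes from the snippets above, while the return type is left as Object because the actual interface produced by com.somewhere.MyFactory isn't shown in the question.

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class FactoryLocator {
            // Looks up the shared factory resource exposed in server.xml and linked in context.xml.
            public static Object lookupFactory() throws NamingException {
                Context initial = new InitialContext();
                // "java:comp/env" is the webapp's read-only JNDI environment context.
                Context env = (Context) initial.lookup("java:comp/env");
                return env.lookup("bean/MyFactory");
            }
        }

    Because this lookup only depends on the ResourceLink name, it keeps working regardless of whether the resource is ultimately declared in server.xml, in a context file, or injected by the RPM during installation.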

    Read the article

  • Strange character encoding problem with Eclipse / Spring / Tomcat 6

    - by Czar
    Hi, I have been trying things all day but can't get a proper solution. My problem is: I am developing a Spring MVC based app in my local Tomcat. My MySQL database has UTF-8 encoding set, and all content in there displays properly when using phpMyAdmin. The output in LOG files using log4j in catalina.out works fine too. My JSP pages are configured with:

        <!-- encoding -->
        <%@ page contentType="text/html; charset=UTF-8" %>
        <%@ page pageEncoding="UTF-8" %>

    Showing data on my JSP also works fine. I can also send data from my controller, without any DB involvement, using special chars, e.g.

        String str = "UTF-8 Test: Ä Ö Ü ß è é â";
        logger.debug(str);
        mav.addObject("utftest", str);

    That displays correctly in the log and on the JSP page in the browser. BUT: when special chars appear directly in my JSP file, e.g. as text in headers, this does not work. FF and Google Chrome display strange chars but report the page to be UTF-8. When switching to Latin, the chars just get more and more strange. Same problem when showing text tokens from my messages.properties file, although Eclipse says when right-clicking that UTF-8 will be used.

    I am a little bit lost and don't know where to check now. Summary:
    - DB storage is fine
    - DB output on JSP is fine
    - Output on JSP directly from the controller is fine
    - Even reading in from forms is fine
    - .properties files and JSP text are not fine!!!

    Any ideas? I really appreciate any tips.
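
    One frequent culprit with messages.properties in a setup like this (a guess, since the exact message-source configuration isn't shown) is that java.util.Properties and the default ResourceBundle loader read .properties files as ISO-8859-1, so UTF-8 bytes saved by Eclipse come out mangled even though the database and controller paths are fine. Non-ASCII characters either have to be escaped (for example with native2ascii) or the bundle has to be loaded through an explicit UTF-8 reader. A minimal Java sketch of the latter, with a hypothetical resource path:

        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.nio.charset.Charset;
        import java.util.PropertyResourceBundle;
        import java.util.ResourceBundle;

        public class Utf8Bundles {
            // Loads a .properties file from the classpath, decoding it as UTF-8 instead of ISO-8859-1.
            public static ResourceBundle loadUtf8Bundle(String resourcePath) throws Exception {
                InputStream in = Utf8Bundles.class.getClassLoader().getResourceAsStream(resourcePath);
                if (in == null) {
                    throw new IllegalArgumentException("Resource not found: " + resourcePath);
                }
                try {
                    return new PropertyResourceBundle(new InputStreamReader(in, Charset.forName("UTF-8")));
                } finally {
                    in.close();
                }
            }
        }

    With Spring's ReloadableResourceBundleMessageSource the same effect is usually achieved by setting its defaultEncoding property to UTF-8, and for literal text in the JSP itself the source file's encoding in Eclipse (Properties > Resource > Text file encoding) has to actually be UTF-8, not just declared as such in the page directive.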

    Read the article

  • Which Perl modules can be installed just by copying lib files?

    - by elliot100
    I'm an absolute beginner at Perl, and am trying to use some non-core modules on my shared Linux web host. I have no command line access, only FTP. Host admins will consider installing modules on request, but the ones I want to use are updated frequently (DateTime::TimeZone for example), and I'd prefer to have control over exactly which version I'm using. By experimentation, I've found some modules can be installed by copying files from the module's lib directory to a directory on the host, and using use lib "local_path"; in my script, i.e. no compiling is required to install (DateTime and DateTime::TimeZone again). How can I tell whether this is the case for a particular module? I realise I'll have to resolve dependencies myself. Additionally: if I wanted to be able to install any module, including those which require compiling, what would I be looking for in terms of hosting? I'm guessing at the moment I share a VM with several others and the minimum provision I'd need would be a dedicated VM with shell access?

    Read the article

  • Cocoa Core Data: cannot save created items in NSTableView

    - by Paul Rostorp
    Hello, I am a beginner in Mac OS X development and am trying to get started with all this. Here is my problem: I've created a non-document-based Cocoa app using Core Data as storage. I've added an entity and attributes to the xcdatamodel. In IB I've created an NSArrayController and linked it properly. I've created an NSTableView bound to the NSArrayController. Next I added a button linked to the NSArrayController with the "add:" method. When I try it out, I can add and edit the items in the table. Here comes the problem: Core Data is supposed to save everything automatically, but to make sure, I linked the "Save" button in the menu to the app delegate and to the File's Owner, first responder, application... everything possible (with both "save:" and "saveAction:" methods). And still it doesn't save when clicking Save: when I restart, the cells created (and renamed) are gone. Also, I haven't even edited the source code yet; for such simple tasks Core Data is supposed to only need Interface Builder. Please help me with this, I haven't found any threads resolving this problem. Thank you in advance.

    Read the article

  • Crystal Reports - export to pdf in MVC

    - by BhejaFry
    Hi folks, I have integrated the code below into my application to generate a PDF file using Crystal Reports in an MVC project. However, after the request is processed, I see only 2 pages in the PDF file while my data returns more than 2 records. Also, the PDF isn't rendered as soon as the page is processed; instead I have to refresh at least once, and then the PDF is rendered in the browser.

        using CrystalDecisions.CrystalReports.Engine;

        public FileStreamResult Report()
        {
            ReportClass rptH = new ReportClass();
            List<sampledataset> data = objdb.getdataset();
            rptH.FileName = Server.MapPath("[reportName].rpt");
            rptH.Load();
            rptH.SetDatabaseLogon("un", "pwd", "server", "db");
            rptH.SetDataSource(data);
            Stream stream = rptH.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
            stream.Seek(0, System.IO.SeekOrigin.Begin);
            return new FileStreamResult(stream, "application/pdf");
        }

    I took the code from here on SO but modified it as above. TIA.

    Read the article

  • Publishing to live website

    - by Alienfluid
    Hey there, My friend and I are collaborating on a ASP.NET powered website. To develop it locally, we use Visual Web Developer Express (good enough for our needs). Subversion (using Tortoise SVN) is our source control of choice with the repository residing on Unfuddle.com. We run into problems when we need to update the live site - since there's no version control on it. Currently we use the "Copy to Website" feature in VWD which copies the files using FTP. Here are some problems: VWD only keeps track of files uploaded by one user, so if the other user uploads a newer version of a file to the live site, VWD on my side cannot tell whether the live version of the file is newer or mine is. There's no way to tell whether all the latest changes are available on the live site. We have to be careful not to party all over the shared web.config file since the other user's local DB settings are different from mine, and of course, the live DB settings are a whole other story! What do you guys use to publish to a live site? Does anything out there tie into Subversion so that we can automate the process and always guarantee that the live site is synced to a change list number? Also, how do you manage the different web.config file settings? Thanks!

    Read the article

  • Problems using a library in Xcode

    - by Pablo
    Hi! I'm currently developing an application for iPhone and I need to use a library initially intended for a Linux environment. Since I'm using a Mac (with Snow Leopard and an Intel Core Duo), I guess it's possible to use this library in my app. My library has 3 files: a .h file, a .a file and a .so file (both the .a and the .so are in /Developer/usr/lib). I have included the .h in my code and I've added the .a in Xcode as a framework (and it works, because Xcode finds the .so when compiling). For your info, when I use the "file" command on the .so file, I get:

        ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped

    When I compile for the Xcode Simulator, I get a warning and an error. The warning is:

        In /Developer/usr/lib/mylib.so, file was built for unsupported file format which is not the architecture being linked (i386)

    The error is:

        "_mylib_fct", referenced from: -[MyAppAppDelegate applicationDidBecomeActive:] in MyAppAppDelegate.o
        Symbol(s) not found
        collect2: ld returned 1 exit status

    When I compile for the Device 3.0 with architecture armv6, I also get the same error, but the warning is slightly different:

        ln /Users/Pablo/MyApp/mylib.a file is not of required architecture

    I've been trying for days to solve this and make the app work with this lib, and I don't understand why the compiler is complaining... Is it a 32/64-bit issue? How can I deal with that? Your help will be very appreciated. Thx!

    Read the article

  • Converting plain text in SQL to hyperlink in Access

    - by manemawanna
    Hello, I've just started a new job, my first since leaving university, and my first task is to convert a wholly Access 2003 database to an Access front-end with a SQL back-end. The Access database consists of a series of front-end forms to add or review staff data; as part of this there are hyperlinks pointing to a photograph of the employee and their CV located on a shared drive. These were saved as hyperlinks within the Access DB. I have since converted the data in the Access DB to SQL and stored it in a database for me to test on; as part of the data conversion the photo and CV locations were converted from hyperlink to nvarchar. I've done this using SSMA. My issue now is that I need to have these text links displayed and working as hyperlinks on the front end, hidden behind the words "Photo" and "CV", but I am unsure how to go about this, as in the past I've only ever used SQL and not Access. Any help or suggestions would be appreciated, and if I've not been clear in any areas please feel free to ask questions and I'll try to clear anything up for you.

    Read the article

  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables): CreateDate, CreateUser, UpdateDate, UpdateUser. Currently we are programmatically updating the audit information. Ex:

        if(changed){
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During save changes, if an entity is created/updated I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? I would prefer not to do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but in order to achieve this I would either have to use reflection (which I don't want to do) or derive a base class. Assuming I go down this route, how would I achieve this? For example:

        EXAMPLE_DB_TABLE
          CODE
          NAME
          --Audit Tables
          CREATE_DATE
          CREATE_USER
          UPDATE_DATE
          UPDATE_USER

    And if I create a base class:

        public abstract class IUpdatable{
            public virtual DateTime CreateDate {set;}
            public virtual string CreateUser { set;}
            public virtual DateTime UpdateDate { set;}
            public virtual string UpdateUser { set;}
        }

    The end goal is to be able to do something like...

        public override void SaveChanges(){
            //Go through state manager and update audit information
            //FOREACH changed entity in state manager
            if(entity is IUpdatable){
                //If state is created... update create audit.
                //if state is updated... update update audit
            }
        }

    But I am not sure how I go about generating the code that would extend the interface.

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%') Sample table A: ID | Col1 | Col2 | Col3 | ------------------------------------- 1 | oklahoma | colorado | Utah | 2 | arkansas | colorado | oklahoma | 3 | florida | michigan | florida | ------------------------------------- The following code will give us row 1 and row 2: select * from A where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%' This is rather ugly, probably slow, and I just don't like it very much. Probably because the implementations that I'm dealing with have 10+ columns that need searched. The following may be a slight improvement as code readability goes, but as far as performance, we're still in the same ball park. select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%' I have thought about simply adding insert, update, and delete triggers that simply add the concatenated version of the above columns into a separate table that shadows this table. Sample Shadow_Table: ID | searchtext | --------------------------------- 1 | oklahoma colorado Utah | 2 | arkansas colorado oklahoma | 3 | florida michigan florida | --------------------------------- This would allow us to perform the following query to search for '%klah%' select * from Shadow_Table where searchtext like '%klah%' I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write and storage space. My gut feeling tells me there there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • WCF Certificates without Certificate Store

    - by Kane
    My team is developing a number of WPF plug-ins for a 3rd party thick client application. The WPF plug-ins use WCF to consume web services published by a number of TIBCO services. The thick client application maintains a separate central data store and uses a proprietary API to access the data store. The thick client and WPF plug-ins are due to be deployed onto 10,000 workstations. Our customer wants to keep the certificate used by the thick client in the central data store so that they don't need to worry about re-issuing the certificate (current re-issue cycle takes about 3 months) and also have the opportunity to authorise the use of the certificate. The proposed architecture offers a form of shared secret / authentication between the central data store and the TIBCO services. Whilst I don’t necessarily agree with the proposed architecture our team is not able to change it and must work with what’s been provided. Basically our client wants us to build into our WPF plug-ins a mechanism which retrieves the certificate from the central data store (which will be allowed or denied based on roles in that data store) into memory then use the certificate for creating the SSL connection to the TIBCO services. No use of the local machine's certificate store is allowed and the in memory version is to be discarded at the end of each session. So the question is does anyone know if it is possible to pass an in-memory certificate to a WCF (.NET 3.5) service for SSL transport level encryption? Note: I had asked a similar question (here) but have since deleted it and re-asked it with more information.

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • can't write to physical drive in win 7??

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure you write in multiples of sectors, i.e. 512 bytes. This worked fine in the past, on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now I have retail Win7 Professional and it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails! Does anyone know why this might be? Could it have something to do with opening it with shared flags? This is always what I have done before, and it's worked. Could it be that something is now sharing the drive, blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access to it? Some other tools that used to work don't anymore either, but some do, like the WD Diagnostic's erase functionality. And after it has erased the drive, my tool then works on it too! This leads me to believe there is some "unmount" process I need to be doing to the drive first, to free up permission to write to it. Any ideas?

    Read the article

  • What is the best practice for using lock within inherited classes

    - by JDMX
    I want to know, if one class is inheriting from another, whether it is better to have the classes share a lock object that is defined in the base class or to have a lock object defined at each inheritance level.

    A very simple example of a lock object on each level of the class:

        public class Foo
        {
            private object thisLock = new object();
            private int ivalue;
            public int Value
            {
                get { lock( thisLock ) { return ivalue; } }
                set { lock( thisLock ) { ivalue = value; } }
            }
        }

        public class Foo2: Foo
        {
            private object thisLock2 = new object();
            public int DoubleValue
            {
                get { lock( thisLock2 ) { return base.Value * 2; } }
                set { lock( thisLock2 ) { base.Value = value / 2; } }
            }
        }

        public class Foo6: Foo2
        {
            private object thisLock6 = new object();
            public int TripleDoubleValue
            {
                get { lock( thisLock6 ) { return base.DoubleValue * 3; } }
                set { lock( thisLock6 ) { base.DoubleValue = value / 3; } }
            }
        }

    A very simple example of a shared lock object:

        public class Foo
        {
            protected object thisLock = new object();
            private int ivalue;
            public int Value
            {
                get { lock( thisLock ) { return ivalue; } }
                set { lock( thisLock ) { ivalue = value; } }
            }
        }

        public class Foo2: Foo
        {
            public int DoubleValue
            {
                get { lock( thisLock ) { return base.Value * 2; } }
                set { lock( thisLock ) { base.Value = value / 2; } }
            }
        }

        public class Foo6: Foo2
        {
            public int TripleDoubleValue
            {
                get { lock( thisLock ) { return base.DoubleValue * 3; } }
                set { lock( thisLock ) { base.DoubleValue = value / 3; } }
            }
        }

    Which example is the preferred way to manage locking within an inherited class?

    Read the article
