Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 411/465 | < Previous Page | 407 408 409 410 411 412 413 414 415 416 417 418  | Next Page >

  • Is XSLT worth investing time in and are there any actual alternatives?

    - by Keeno
    I realize there have been a few other questions on this topic, and people are saying "use your language of choice to manipulate the XML", etc., but none of them quite fit my question exactly.

    Firstly, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain.

    The idea: generate an XML file + schema, then produce some XSLT files that process the XML into the e-learning modules. XML to HTML via XSLT.

    Why:
      - We would like the flexibility to easily reformat the content (I realize CSS is a viable alternative here).
      - If we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files. So far we have about 30 modules, with 10-30 pages each.
      - Depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do.
      - All of this has to be platform independent, and able to run "offline", i.e. without a server powering the HTML.

    Negatives I've read so far for XSLT:
      - Overhead? Not exactly sure why... is it the compute power needed to convert to HTML?
      - Difficult to learn.
      - Better alternatives exist.

    Now, what I would like to know exactly is: are there actually any viable alternatives for this "offline" case? Am I going about it in the correct manner? Do you guys have any advice or alternatives? Thanks!
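    As a rough illustration of the XML-to-HTML-via-XSLT step (the element names below are my own invention, not taken from the question), a minimal XSLT 1.0 stylesheet like this can be applied completely offline by any modern browser via an xml-stylesheet processing instruction, or by a command-line processor such as xsltproc:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="html" indent="yes"/>
          <!-- Render a hypothetical <module> document as one HTML page -->
          <xsl:template match="/module">
            <html>
              <head><title><xsl:value-of select="@title"/></title></head>
              <body>
                <xsl:for-each select="page">
                  <div class="page">
                    <h1><xsl:value-of select="heading"/></h1>
                    <xsl:copy-of select="content/node()"/>
                  </div>
                </xsl:for-each>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>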

    Read the article

  • Best way to create a SPARQL endpoint for an RDBMS (MySQL database)

    - by Ankur
    I am doing (well, want to do) some experiments with Linked Open Datasets, particularly those put out by governments. I have an RDBMS (more specifically MySQL). I designed it with semantic web ideas in mind, i.e. I have information stored as objects, predicates and classes which define objects. In turn, all objects are related to each other through statements of the form subject -- predicate -- object (where the subjects are from the objects table).

    I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this is possible?

    I have looked at Jena. Using Jena seems to mean I have to use it as the storage application rather than MySQL. The only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning), but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing. I am using Java, so a Java API is preferred.

    It's also possible I have misunderstood the purpose of Jena, and maybe it can be of use, but I am not sure how. I am sure four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.

    Read the article

  • Ajax request gets to server but page doesn't update - Rails, jQuery

    - by Jesse
    So I have a scenario where my jQuery ajax request is hitting the server, but the page won't update. I'm stumped... Here's the ajax request:

        $.ajax({
          type: 'GET',
          url: '/jsrender',
          data: "id=" + $.fragment().nav.replace("_link", "")
        });

    Watching the Rails logs, I get the following:

        Processing ProductsController#jsrender (for 127.0.0.1 at 2010-03-17 23:07:35) [GET]
          Parameters: {"action"=>"jsrender", "id"=>"products", "controller"=>"products"}
        ...
        Rendering products/jsrender.rjs
        Completed in 651ms (View: 608, DB: 17) | 200 OK [http://localhost/jsrender?id=products]

    So, it seems apparent to me that the ajax request is getting to the server. The code in the jsrender method is being executed, but the code in jsrender.rjs doesn't fire. Here's the jsrender method:

        def jsrender
          @currentview = "shared/#{params[:id]}"
          respond_to do |format|
            format.js { render :template => 'products/jsrender.rjs' }
          end
        end

    For the sake of argument, the code in jsrender.rjs is:

        page << "alert('this works!');"

    Why is this? I see in the params that there is no authenticity_token, but I have tried passing an authenticity_token as well with the same result. Thanks in advance.
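    One thing worth noting (a sketch of a possible fix, not something confirmed by the question): an RJS template answers with JavaScript that the browser still has to evaluate, and a plain GET with no dataType just receives it as text. Telling jQuery to treat the response as a script makes it execute on arrival:

        $.ajax({
          type: 'GET',
          url: '/jsrender',
          data: "id=" + $.fragment().nav.replace("_link", ""),
          dataType: 'script'   // evaluate the RJS-generated JavaScript when it comes back
        });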

    Read the article

  • Haml Inherit Templates

    - by kjfletch
    I'm using Haml (Haml/Sass 3.0.9 - Classy Cassidy) stand-alone to generate static HTML. I want to create a shared layout template that all my other templates inherit.

    Layout.haml:

        %html
          %head
            %title Test Template
          %body
            .Content

    Content.haml:

        SOMEHOW INHERIT Layout.haml
        SOMEHOW Change the title of the page "My Content".
        %p This is my content

    To produce Content.html:

        <html>
          <head>
            <title>My Content</title>
          </head>
          <body>
            <div class="Content">
              <p>This is my content</p>
            </div>
          </body>
        </html>

    But this doesn't seem possible. I have seen the use of rendering partials when using Haml with Rails, but can't find any solution when using Haml stand-alone. Having to put the layout code in all of my templates would be a maintenance nightmare; so my question is how do I avoid doing this? Is there a standard way to solve this problem? Have I missed something fundamental?
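    Stand-alone Haml has no built-in template inheritance, but one workaround to sketch (file names and the title local are my assumptions) is a small Ruby build script: render the content template first, then pass it into the layout through the block that "= yield" picks up, with the title supplied as a local:

        require 'haml'

        # layout.haml:               # content.haml:
        #   %html                    #   %p This is my content
        #     %head
        #       %title= title
        #     %body
        #       .Content
        #         = yield
        layout  = File.read('layout.haml')
        content = File.read('content.haml')

        inner = Haml::Engine.new(content).render
        html  = Haml::Engine.new(layout).render(Object.new, :title => 'My Content') { inner }

        File.open('content.html', 'w') { |f| f.write(html) }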

    Read the article

  • MS Access Bulk Insert exits app

    - by Brij
    In the web service, I have to bulk insert data into an MS Access database. First there was a single complex INSERT ... SELECT query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler, using LINQ to remove the complexity, and now I am inserting records using a For loop. There are approx. 10000 records. Again, the same problem. I put a breakpoint after the loop, but it is never hit. I have used Try/Catch, but no error is found.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", _
                                    item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + _
                System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation of MS Access?
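    For what it's worth, here is a sketch of the pattern I would try instead (table and column names are copied from the question, parameter types and sizes are my guesses): open one connection for the whole batch and reuse a single parameterised command, rather than formatting 10000 SQL strings and opening/closing 10000 connections against the Jet provider:

        Using con As New OleDb.OleDbConnection(nowConString)
            con.Open()
            ' [Table] is bracketed because TABLE is a reserved word in Jet SQL
            Using cmd As New OleDb.OleDbCommand( _
                    "insert into [Table](Id,link,file1,file2,pID) values (?,?,?,?,?)", con)
                cmd.Parameters.Add("@Id", OleDb.OleDbType.Integer)
                cmd.Parameters.Add("@link", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@file1", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@file2", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@pID", OleDb.OleDbType.Integer)
                For Each item In lstSelDel
                    cmd.Parameters(0).Value = item.WebInfoID
                    cmd.Parameters(1).Value = item.Links
                    cmd.Parameters(2).Value = item.Name
                    cmd.Parameters(3).Value = item.pName
                    cmd.Parameters(4).Value = pDateID
                    cmd.ExecuteNonQuery()   ' OLE DB binds the ? placeholders by position
                Next
            End Using
        End Using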

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start(); // sets a 'user' cookie with a session id.
        // This session stores the user's login data.
        // It's an ordinary session.
        $USERSESSION = $_SESSION; // saving this session
        $userSessId = session_id();
        session_write_close(); // closing this session

        session_name('chatroom');
        session_start(); // sets a 'chatroom' cookie, but it contains the same session id and points to the same data :( .
        // This session would be used to store the chat text. This session would be
        // shared between all users joined in this chat room by setting the same session id for all members.
        // Calling session_regenerate_id would make a different session id for every user, and it wouldn't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with ajax that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory so they can be accessed fast. I could store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.
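    For the record, PHP only generates a fresh id when the request doesn't already carry one; since the first session_start() set a cookie, the second session (under a different name) happily reuses that id. A sketch of one way around it ($roomId is my own variable, not from the question) is to give the chatroom session an explicit id that is shared by every member of the room:

        <?php
        // ... open and close the per-user 'user' session as above ...

        $roomId = 'room_' . md5('document-42');   // hypothetical: one fixed id per chat room

        session_name('chatroom');
        session_id($roomId);      // force this session onto the room's shared id
        session_start();          // every member of the room now opens the same session data
        $_SESSION['log'][] = array('user' => $userSessId, 'text' => 'hello', 'ts' => time());
        session_write_close();    // release the session lock quickly so other pollers aren't blocked
        ?>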

    Read the article

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or through any open source licence...

        #include <stdio.h>

        int main(void)
        {
            int version = 2;
            printf("\r\n.Hello world, ver:(%d).", version);
            return 0;
        }

    It's a little obvious, or just a language-definition example... When does a source stop being "trivial, banal, commonplace, obvious" and start to be something over which you may claim "rights"? Perhaps it depends on who reads it: something that could be pure genius to someone who has never programmed could be just obvious to an expert.

    It's easy when, comparing two sources, there are 10000 identical lines of code; that's a theft... but it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject?

    (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner, just to punish not caring about storage space and world resources =P)

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow).

    From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level (for things like the DataReader context) which keeps throwing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post.

    I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against the databases in a single SQL Server instance in parallel.

    Thanks for any help you can give,
    Daniel

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ball park:

        select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                 |
        ---------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
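    One middle ground that may be worth sketching (column names are from the question; the rest is my assumption, not a built-in substring-search feature): a persisted computed column keeps the concatenation maintained by the engine itself, so there is no trigger or shadow table to remember, and every query can target the same column:

        ALTER TABLE A ADD searchtext AS
            (Col1 + ' ' + Col2 + ' ' + Col3) PERSISTED;

        -- the leading wildcard still forces a scan, but the concatenation logic
        -- now lives in exactly one place
        SELECT * FROM A WHERE searchtext LIKE '%klah%';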

    Read the article

  • LoadError in Ruby

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1'.

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
            from (irb):1:in `require'
            from (irb):1
            from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    I know some standard modules require fine, while others don't. If I say require 'yaml' or even require 'digest', that works fine.

    I am using OS X 10.5.8, with Ruby 1.9.1-p378. The system-wide install of Ruby 1.8.6 works fine. Just last week I uninstalled Ruby and re-installed it. When I first installed Ruby I installed it in a similar manner, from source, prefixed at my local $HOME/usr directory. I tried removing each and every file make install installs, then re-installing, but that didn't help.

    Do you have an idea what the issue is and how to resolve it?

    Read the article

  • HTML not appearing in Emails

    - by John
    My website sends out HTML emails, but most of my recipients are receiving them as HTML marked-up source instead of the nice table layouts. The problem doesn't appear to be an email client issue, since the emails display properly in web mail clients like Gmail, Yahoo, Hotmail, etc. They also display properly when viewed through Outlook or Thunderbird connected to Gmail, Yahoo, Hotmail, etc.

    However, I have one domain name that I registered with a shared hosting provider called 1and1.com. I tried viewing my emails through their webmail client, Thunderbird and Outlook, but in all three cases only the HTML markup showed up. Also, I assume most of my recipients use MS Outlook with MS Exchange Server, because they are business/finance people. Unfortunately, I don't know how to get an email address that's managed by an MS Exchange Server.

    I made sure I'm sending my emails with the following headers:

        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1

    Does anyone know what might be wrong? Can anyone recommend a solution?
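    For reference, a complete message with those headers in the right place looks like the sketch below (addresses and the boundary string are invented). If the blank line separating headers from body is missing, or the Content-Type header ends up inside the body, receivers fall back to text/plain and show exactly the raw-markup symptom described; a multipart/alternative structure with a plain-text part is also the usual courtesy:

        From: sender@example.com
        To: recipient@example.com
        Subject: Monthly report
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="=_boundary_1234"

        --=_boundary_1234
        Content-Type: text/plain; charset=iso-8859-1

        Plain-text fallback for clients that refuse HTML.

        --=_boundary_1234
        Content-Type: text/html; charset=iso-8859-1

        <html><body><table><tr><td>Nice table layout here</td></tr></table></body></html>

        --=_boundary_1234--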

    Read the article

  • Servlet Security question about j_security_check, j_username and j_password

    - by Nitesh Panchal
    Hello, I used jdbcRealm in my web application and it's working fine. I also defined all constraints in my web.xml, e.g. all pages under the URL pattern /Admin/* should be accessible only by the admin role. I have a login form which uses the standard j_security_check, j_username and j_password.

    Now, when I type Admin/home.jsf it rightly redirects me to login.jsf, and when I type the password there I am redirected to home.jsf. This works all right, but the problem comes when I go directly to login.jsf and then type the username and password. This time it redirects me to login.jsf again.

    Is there any way I can specify which page to go to on a successful login? I need to specify different pages for different roles: for Admin it is /Admin/home.jsf, for general users it is /General/home.jsf, because the login form is shared between the different types of users. Where do I specify all these things?

    Secondly, I want to have a "remember me" checkbox at the end of the login form. How do I do this? By default the form is submitted to the j_security_check servlet and I have no control over its execution.

    Please help. This doesn't seem so hard, but it looks like I am missing something.
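    Container-managed FORM login only knows how to send you back to "the page you originally asked for", so a common pattern (sketched here with made-up names beyond those in the question; assumes Servlet 3.0 annotations, otherwise map the servlet in web.xml) is to have every user land on one common URL after login and branch on the role there:

        // LandingServlet.java -- hypothetical post-login dispatcher mapped to /home
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.*;

        @WebServlet("/home")
        public class LandingServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                if (req.isUserInRole("Admin")) {
                    resp.sendRedirect(req.getContextPath() + "/Admin/home.jsf");
                } else {
                    resp.sendRedirect(req.getContextPath() + "/General/home.jsf");
                }
            }
        }

    Links that start the login flow then point at /home instead of a role-specific page, and the container's own form-login-config in web.xml stays unchanged.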

    Read the article

  • Java Concurrency in practice sample question

    - by andy boot
    I am reading "Java Concurrency in practice" and looking at the example code on page 51. This states that if a thread has references to a shared object then other threads may be able to access that object before the constructor has finished executing. I have tried to put this into practice and so I wrote this code thinking that if I ran it enough times a RuntimeException("World is f*cked") would occur. But it isn't doing. Is this a case of the Java spec not guaranting something but my particular implementation of java guaranteeing it for me? (java version: 1.5.0 on Ubuntu) Or have I misread something in the book? Code: (I expect an exception but it is never thrown) public class Threads { private Widgit w; public static void main(String[] s) throws Exception { while(true){ Threads t = new Threads(); t.runThreads(); } } private void runThreads() throws Exception{ new Checker().start(); w = new Widgit((int)(Math.random() * 100) + 1); } private class Checker extends Thread{ private static final int LOOP_TIMES = 1000; public void run() { int count = 0; for(int i = 0; i < LOOP_TIMES; i++){ try { w.checkMe(); count++; } catch(NullPointerException npe){ //ignore } } System.out.println("checked: "+count+" times out of "+LOOP_TIMES); } } private static class Widgit{ private int n; private int n2; Widgit(int n) throws InterruptedException{ this.n = n; Thread.sleep(2); this.n2 = n; } void checkMe(){ if (n != n2) { throw new RuntimeException("World is f*cked"); } } } }

    Read the article

  • Artisan unable to access environment variables from $_ENV

    - by hansn
    Any artisan command I enter into the command line throws this error:

        $ php artisan
        <? return array(
            'DB_HOSTNAME' => 'localhost',
            'DB_USERNAME' => 'root',
            'DB_NAME' => 'pc_booking',
            'DB_PASSWORD' => 'secret',
        );
        PHP Warning:  Invalid argument supplied for foreach() in /home/martin/code/www/pc_backend/vendor/laravel/framework/src/Illuminate/Config/EnvironmentVariables.php on line 35
        {"error":{"type":"ErrorException","message":"Undefined index: DB_HOSTNAME","file":"\/home\/martin\/code\/www\/pc_backend\/app\/config\/database.php","line":57}}

    This is only on my local development system, where I recently installed Apache and PHP. On my production system on a shared host, artisan commands work just fine. The prod system has its own .env.php, but other than that the code should be identical.

    Relevant files:

    .env.local.php

        <? return array(
            'DB_HOSTNAME' => 'localhost',
            'DB_USERNAME' => 'root',
            'DB_NAME' => 'pc_booking',
            'DB_PASSWORD' => 'secret',
        );

    app/config/database.php

        <?php
        return array(
            'fetch' => PDO::FETCH_CLASS,
            'default' => 'mysql',
            'connections' => array(
                'mysql' => array(
                    'driver' => 'mysql',
                    'host' => $_ENV['DB_HOSTNAME'],
                    'database' => $_ENV['DB_NAME'],
                    'username' => $_ENV['DB_USERNAME'],
                    'password' => $_ENV['DB_PASSWORD'],
                    'charset' => 'utf8',
                    'collation' => 'utf8_unicode_ci',
                    'prefix' => '',
                ),
            ),
            'migrations' => 'migrations',
        );

    The $_ENV array is populated as expected on the website - the problem appears to be with artisan only.

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

      - Pre-load some reference data. We need the reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
      - There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
      - Lastly, there are some operations that should work offline, and changes should be synced later.

    To make it more concrete:

      - We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
      - We need to sync server changes to vendor and product information.
      - If they search the full catalog, that requires them to be online.
      - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

      - Is there a way to use the RESTAdapter in combination with LocalStorage?
      - Is there some Socket.IO support? (Happy to do this part manually.)
      - Is there queueing support? Ideally at the Ember Data level.

    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with an OKI 431 microcontroller. It can communicate with a PC when the appropriate software is installed. An EEPROM is connected to the I2C bus of the micro and works as permanent memory. The PC software can read from and write to this EEPROM.

    Consider two numbers, B and C, each a two-byte integer. B is known to both the PC software and the micro and is a constant. C will be a number so close to B that B-C will fit in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and will be stored into the EEPROM of the micro for later use.

    Now the micro can store C in two ways:

      1. The micro can store the whole two bytes representing C.
      2. The micro can store B-C as a one-byte signed integer, and can later derive C from B and B-C.

    I think that two's-complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium which will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement.

    Should I get rid of the headache that a negative number can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick to the decision of the two-byte solution because I don't need to deal with negative numbers? Which one would you prefer and why?
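    Just to make the one-byte option concrete, here is a sketch in C (function names are mine); the encode/decode pair is symmetric as long as both sides agree that the stored byte is a two's-complement signed 8-bit value, which C99's int8_t type guarantees on any conforming compiler:

        #include <stdint.h>
        #include <assert.h>

        /* PC side: store the delta, which must fit in [-128, 127] */
        uint8_t encode_delta(uint16_t B, uint16_t C)
        {
            int16_t delta = (int16_t)(B - C);
            assert(delta >= -128 && delta <= 127);
            return (uint8_t)(int8_t)delta;      /* the single EEPROM byte */
        }

        /* Micro side: reconstruct C from the known constant B and the stored byte */
        uint16_t decode_C(uint16_t B, uint8_t stored)
        {
            int8_t delta = (int8_t)stored;      /* reinterpret as signed two's complement */
            return (uint16_t)(B - delta);
        }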

    Read the article

  • [C++][OpenMP] Proper use of "atomic directive" to lock STL container

    - by conradlee
    I have a large number of sets of integers, which I have, in turn, put into a vector of pointers. I need to be able to update these sets of integers in parallel without causing a race condition. More specifically, I am using OpenMP's "parallel for" construct.

    For dealing with shared resources, OpenMP offers a handy "atomic directive," which allows one to avoid a race condition on a specific piece of memory without using locks. It would be convenient if I could use the "atomic directive" to prevent simultaneous updating of my integer sets; however, I'm not sure whether this is possible.

    Basically, I want to know whether the following code could lead to a race condition:

        vector< set<int>* > membershipDirectory(numSets, new set<int>);

        #pragma omp for schedule(guided, expandChunksize)
        for (int i = 0; i < 100; i++)
        {
            set<int>* sp = membershipDirectory[5];
            #pragma omp atomic
            sp->insert(45);
        }

    (Apologies for any syntax errors in the code; I hope you get the point.)

    I have seen a similar example of this for incrementing an integer, but I'm not sure whether it works when working with a pointer to a container, as in my case.
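    For contrast, here is a sketch of the same loop protected with a named critical section instead (the missing declarations are filled in with assumed values); OpenMP's atomic construct only covers simple scalar updates such as x += 1, so a library call like set::insert needs critical or an explicit lock:

        #include <omp.h>
        #include <set>
        #include <vector>
        using namespace std;

        int main()
        {
            const int numSets = 10, expandChunksize = 4;   // assumed values
            // Note: each slot gets its own set here; the original constructor call
            // shares a single set<int>* across all numSets slots.
            vector< set<int>* > membershipDirectory;
            for (int s = 0; s < numSets; ++s)
                membershipDirectory.push_back(new set<int>);

            #pragma omp parallel for schedule(guided, expandChunksize)
            for (int i = 0; i < 100; i++)
            {
                set<int>* sp = membershipDirectory[5];
                #pragma omp critical(membership)   // serialises the container mutation
                sp->insert(45);
            }

            for (int s = 0; s < numSets; ++s)
                delete membershipDirectory[s];
            return 0;
        }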

    Read the article

  • Function behaviour in a shell (ksh) script

    - by footy
    Here are 2 different versions of a program.

    This program:

        #!/usr/bin/ksh
        printmsg() {
          i=1
          print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`;
        printmsg
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        hello function :)
        1

    and this program:

        #!/usr/bin/ksh
        printmsg() {
          i=1
          print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`;
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        0

    The only difference between the two programs is that printmsg is called twice in the first program, while it is called once in the second program. My doubt arises here. To quote:

        Be warned: Functions act almost just like external scripts... except that by default, all variables are SHARED between the same ksh process! If you change a variable name inside a function.... that variable's value will still be changed after you have left the function!!

    But we can clearly see in the second program's output that the value of i remains unchanged, even though we are sure that the function is called, since its print output is captured and echoed. So why is the output different in the two?
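    A tiny sketch of what is most likely going on (my own example, not from the question): command substitution with backticks runs the function in a subshell, so assignments made there never reach the parent shell, while a direct call runs in the current shell environment, which is the case the quoted warning describes:

        #!/usr/bin/ksh
        i=0
        echo `i=1`        # backticks: the assignment happens in a subshell
        echo $i           # prints 0 - the parent's i is untouched
        i=2               # direct assignment in the current shell
        echo $i           # prints 2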

    Read the article

  • How do I handle the messages for a simple web-based live chat, on the server side?

    - by Carson Myers
    I'm building a simple live chat into a web application running on Django, but one thing I'm confused about is how I should store the messages between users.

    The chat will support multiple users, and a chat "session" is composed of users connected to one user that is the "host." The application is a sort of online document collaboration thing, so user X has a document, and users Y and Z would connect to user X to talk about the document; that would be one chat session.

      - If user Y disconnected for five minutes and then signed back in and reconnected to user X, he should not get any of the messages shared between users X and Z while he was away.
      - If users X, Y, and Z can have a chat session about user X's document, then users X and Y can connect to a simultaneous, but separate, discussion about user Z's document.

    How should I handle this? Should I keep each message in the database? Each message would have an owner user and a target user (the host), and a separate table would be used to connect users with messages (which messages are visible to which users). Or should I store each session as an HTML file on the server, which messages get appended to?

    The problem is, I can't just send messages directly between clients. They have to be sent to the server in a POST request, and then each client has to periodically check for the messages in a GET request. Except I can't just have each message cleared after a client fetches it, because there could be multiple clients. How should I set this up?

    Any suggestions?
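    A sketch of the database-backed option in Django terms (model and field names are mine, not from the question): tie each message to a session and give it an id/timestamp, and have each client's poll ask only for messages newer than the last one it has seen. Nothing is ever cleared per reader, and a client that reconnects can start from the current newest id so it naturally skips whatever happened while it was away:

        from django.db import models
        from django.contrib.auth.models import User


        class ChatSession(models.Model):
            # the "host" whose document is being discussed
            host = models.ForeignKey(User, related_name='hosted_sessions',
                                     on_delete=models.CASCADE)
            participants = models.ManyToManyField(User, related_name='chat_sessions')


        class Message(models.Model):
            session = models.ForeignKey(ChatSession, related_name='messages',
                                        on_delete=models.CASCADE)
            author = models.ForeignKey(User, on_delete=models.CASCADE)
            text = models.TextField()
            created = models.DateTimeField(auto_now_add=True)


        def messages_since(session, last_seen_id):
            """Poll helper: everything posted to this session after last_seen_id."""
            return session.messages.filter(id__gt=last_seen_id).order_by('id')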

    Read the article

  • How to force inclusion of an object file in a static library when linking into executable?

    - by Brian Bassett
    I have a C++ project that, due to its directory structure, is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.)

    Library A has an init function (A_init, defined in A/initA.cpp) that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B (which is our desired behavior).

    The problem comes in that the A library also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, symbols from A/Afort.o are not included in B.

    On Windows, we can artificially create a reference by using the pragma:

        #pragma comment (linker, "/export:_Af")

    Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp:

        extern void Af(void);
        static void (*Af_fp)(void) = &Af;

    This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?
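    One linker-level option worth sketching (GNU ld specific; the CMake wrapping assumes the targets are literally named A and B as in the question): pull every object of the static library into the shared library with --whole-archive, so Af is present regardless of whether anything in B references it:

        # CMakeLists.txt fragment for target B (GNU toolchain)
        if(NOT MSVC)
          target_link_libraries(B
            -Wl,--whole-archive A -Wl,--no-whole-archive)
        else()
          # On MSVC the existing /export pragma (or a .def file) already keeps Af
          target_link_libraries(B A)
        endif()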

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that, for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data.

    I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes.

    I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.

    Read the article

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .NET use DDBs, DIBs, or something else entirely? If possible, please cite your sources.

    I'm wondering because we currently have a classic ASP application that uses a 3rd-party component to load images, and it occasionally produces a "Not enough storage is available to process this command." error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine.

    After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem.

    We are in the early stages of converting our app to .NET, and I am wondering if using .NET for this might be a viable alternative to our current method, which is why I am asking the question. Any advice is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the question: does .NET use DDBs or DIBs?

    Read the article

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g., MatrixBase) from which more specialized (or more constrained) matrix classes (e.g., SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derived classes derive publicly/protectedly/privately? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else?

    I was having a conversation about this with a software developer colleague (I am not one, per se) who mentioned that it is a common design mistake to derive a more restricted class from a more general one (he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, as being similar to the matrix design issue), even when it is true that a SparseMatrix "IS A" MatrixBase.

    The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix class object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage implementation differs, operator()(int,int) should work as expected for any type of matrix class.

    I have started looking at a few open-source matrix libraries, and it appears this is kind of a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning on helping out with a refactoring of a math library where this has been a point of contention, and I'd like to have opinions (unless there really is an objective right answer to this question) as to which design philosophy would be best and what the pros and cons of any reasonable approach are.

    Read the article

  • Another developer revoked and re-created my client's iOS Distribution Certificate - does this mean I can never update my client's existing app?

    - by Schnapple
    Here is the story so far:

      - A client hired us to do an iPhone app for them.
      - This client had never done an iPhone app before, and as part of the arrangement we handled all aspects for them, including App Store submission, and we handle some level of future development (new features, bug/security fixes, etc.).
      - We created a Distribution certificate and key pair on the client's behalf.
      - We developed the app and published it to the App Store without incident.
      - Some time later the client hired a second developer to do a different app for them.
      - This second developer, it appears, has revoked the existing Distribution certificate and created a new one with a new key pair on their system.
      - This second developer shared the new Distribution certificate and key pair with us for future reference.
      - Due to user error, this new certificate and key pair has now been imported onto the Macintosh where the original certificate and key pair for the app we developed were created, and the originals were not backed up.

    So we have:

      - App #1 on the App Store, with Distribution certificate/key pair #1
      - App #2 either on the App Store or soon to be, using Distribution certificate/key pair #2
      - Distribution certificate/key pair #1 appears to be lost now

    So my question is: if we ever need to update App #1, will we be able to, using Distribution certificate/key pair #2? Or will we have to upload it as a new app?

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the "WHERE" filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior should be to return a DataTable with Rows.Count = 0.

    Can anyone think of a clean workaround for this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something?

    cheers

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above.

    cheers
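    A sketch of the kind of wrapper I would reach for (the helper name is mine): check whether the filtered sequence has any rows, and fall back to Clone(), which copies only the schema, so the empty case comes back as a zero-row table with the right columns:

        Imports System.Collections.Generic
        Imports System.Data
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module DataTableHelpers
            <Extension()> _
            Public Function CopyToDataTableOrEmpty(ByVal rows As IEnumerable(Of DataRow), ByVal schemaSource As DataTable) As DataTable
                If rows.Any() Then
                    Return rows.CopyToDataTable()
                End If
                ' No matching rows: hand back an empty table with the same columns
                Return schemaSource.Clone()
            End Function
        End Module

    The call site then becomes Data.AsEnumerable.Where(...).CopyToDataTableOrEmpty(Data), and callers never see the "no DataRows" exception.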

    Read the article
