Search Results

Search found 16023 results on 641 pages for 'knowledge management'.

Page 579/641

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the Systems & Admin team and have been given the task of creating a quota management application to try and encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using, as from what I've seen elsewhere there's no other way to do this in C#. The issue with it is that there's quite a high overhead while it retrieves the size of each file and then creates a total.

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
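    A minimal sketch of the hashtable-plus-FileSystemWatcher idea described above might look something like the following (the root path, the seeding scan and the event wiring are illustrative assumptions, and a real version would need locking, since watcher events arrive on background threads):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class QuotaTracker
        {
            private readonly Dictionary<string, long> _sizes = new Dictionary<string, long>();
            private readonly FileSystemWatcher _watcher;
            private long _total;

            public QuotaTracker(string root)
            {
                // One-off scan to seed the per-file sizes and the running total.
                foreach (FileInfo f in new DirectoryInfo(root).GetFiles("*.*", SearchOption.AllDirectories))
                {
                    _sizes[f.FullName] = f.Length;
                    _total += f.Length;
                }

                // After that, only react to changes instead of rescanning everything.
                _watcher = new FileSystemWatcher(root) { IncludeSubdirectories = true };
                _watcher.Changed += (s, e) => Update(e.FullPath);
                _watcher.Created += (s, e) => Update(e.FullPath);
                _watcher.Deleted += (s, e) => Remove(e.FullPath);
                _watcher.EnableRaisingEvents = true;
            }

            public long Total { get { return _total; } }

            private void Update(string path)
            {
                var info = new FileInfo(path);
                if (!info.Exists) return;
                long old;
                _sizes.TryGetValue(path, out old);
                _sizes[path] = info.Length;
                _total += info.Length - old;   // adjust the running total by the delta
            }

            private void Remove(string path)
            {
                long old;
                if (_sizes.TryGetValue(path, out old))
                {
                    _sizes.Remove(path);
                    _total -= old;
                }
            }
        }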

    Read the article

  • How to programmatically compare permissions of a login/user in SQL Server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I don't know what method he is using to do the import. According to the person doing it, the same import works fine on the development server, but when he runs it in production it gives him errors. Below are the errors he gets for each account:

        2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
        2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the user/login performing the import on both the development server and production for comparison, and found no difference. My question is: is there a way to programmatically query the server-level and database-level permissions of a particular login/user, so I can compare/contrast them for any differences?
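    As a rough sketch of the kind of programmatic comparison being asked about, the snippet below reads the explicitly granted server-level permissions for a named login from two servers and diffs them. The connection strings and login name are placeholders; database-level permissions could be read the same way from sys.database_permissions, and effective permissions (including those inherited through roles) from fn_my_permissions.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        class PermissionDiff
        {
            // Explicitly granted/denied server-level permissions for one login.
            static HashSet<string> ServerPermissions(string connectionString, string login)
            {
                const string sql =
                    "SELECT pe.state_desc + ' ' + pe.permission_name " +
                    "FROM sys.server_permissions pe " +
                    "JOIN sys.server_principals pr ON pe.grantee_principal_id = pr.principal_id " +
                    "WHERE pr.name = @login";

                var result = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@login", login);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            result.Add(reader.GetString(0));
                }
                return result;
            }

            static void Main()
            {
                // Placeholder server names and login.
                var dev  = ServerPermissions("Server=DEVSQL;Database=master;Integrated Security=SSPI", "ACCOUNT1");
                var prod = ServerPermissions("Server=PRODSQL;Database=master;Integrated Security=SSPI", "ACCOUNT1");

                Console.WriteLine("Only on dev : " + string.Join(", ", dev.Except(prod).ToArray()));
                Console.WriteLine("Only on prod: " + string.Join(", ", prod.Except(dev).ToArray()));
            }
        }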

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource-constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value?

      1. A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.)
      2. A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.)
      3. Something else I haven't thought of.

    Or put another way, when you're using C-family languages, is the top of your wish list more expressiveness, better error checking, or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables:

        Nodes(id, value)
        Relations(id, nodeA, nodeB, value)
        Triangles(id, relation1_id, relation2_id, relation3_id)

    In order to generate triangles from both the nodes and relations tables, I've used the following query:

        INSERT INTO Triangles
        SELECT t1.id, t2.id, t3.id
        FROM Relations t1, Relations t2, Relations t3
        WHERE t1.id < t2.id AND t3.id > t1.id
          AND (
                t1.nodeA = t2.nodeA
                AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB
                     OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB)
                OR t1.nodeA = t2.nodeB
                AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA
                     OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB)
              )

    It works perfectly on small data sets (fewer than about 50 points). In some cases, however, I've got around 100 points all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even in the millions, the query can take several hours. My main problem is not the SELECT itself: when I watch it execute in Management Studio, the results come back slowly, at around 2000 rows per minute, which is not acceptable for my case. As a matter of fact, the amount of work grows exponentially, and that is terribly affecting the performance. I've tried doing it as LINQ to Objects from my code, but the performance was even worse. I've also tried using SqlBulkCopy on a reader over the result from C#, also with no luck. So the question is... any ideas or workarounds?
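    One possible workaround along the lines of the LINQ-to-Objects attempt, but using per-node adjacency sets so that each relation only has to check the neighbours common to its two endpoints, is sketched below. The Edge shape and the assumption of at most one relation per node pair are mine; the resulting triples could then be bulk-inserted into Triangles with SqlBulkCopy.

        using System;
        using System.Collections.Generic;

        class Edge
        {
            public int Id, NodeA, NodeB;
        }

        static class TriangleFinder
        {
            // Enumerates each triangle once, as a sorted triple of relation ids.
            public static IEnumerable<Tuple<int, int, int>> Find(IList<Edge> edges)
            {
                var edgeByPair = new Dictionary<long, int>();          // (nodeA, nodeB) -> relation id
                var neighbours = new Dictionary<int, HashSet<int>>();  // node -> adjacent nodes

                Func<int, int, long> key = (a, b) => a < b
                    ? ((long)a << 32) | (uint)b
                    : ((long)b << 32) | (uint)a;

                foreach (var e in edges)
                {
                    edgeByPair[key(e.NodeA, e.NodeB)] = e.Id;
                    if (!neighbours.ContainsKey(e.NodeA)) neighbours[e.NodeA] = new HashSet<int>();
                    if (!neighbours.ContainsKey(e.NodeB)) neighbours[e.NodeB] = new HashSet<int>();
                    neighbours[e.NodeA].Add(e.NodeB);
                    neighbours[e.NodeB].Add(e.NodeA);
                }

                foreach (var e in edges)
                {
                    // Any node adjacent to both endpoints closes a triangle.
                    foreach (int c in neighbours[e.NodeA])
                    {
                        if (c == e.NodeB || !neighbours[e.NodeB].Contains(c)) continue;
                        int ac = edgeByPair[key(e.NodeA, c)];
                        int bc = edgeByPair[key(e.NodeB, c)];
                        // Report the triangle only from its lowest-numbered relation.
                        if (e.Id < ac && e.Id < bc)
                            yield return Tuple.Create(e.Id, Math.Min(ac, bc), Math.Max(ac, bc));
                    }
                }
            }
        }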

    Read the article

  • Creating Modules for a VB Program (Similar to a Firefox Add-on)

    - by Assimilater
    Hi all. I have a contact manager program that I'd like to design for multiple network marketing companies. My desired structure would include a core program which covers basic contact management functions. After that, one could purchase a module for their specific network marketing company. This module would contain a variety of controls that the original program would need to be able to manipulate. Here is an example of what it would have to have:

      - A group box containing buttons that link to a genealogy view, and the option to import one's downline from the back office provided by a company.
      - A panel, displayed on the contact page, allowing the user to input business information or which will be filled by importing a downline from the back office: i.e. business ID, qualification level, sponsor information, etc.
      - A panel, displayed when one searches for contacts on the contact list page, which allows the user to sort based on information such as when they joined, what their business ID is and so forth.
      - A panel, displayed on a personal business overview page, which presents to an individual how many people in their downline are at each qualification level and can build a mailing list for individuals of a certain qualification level.

    I have the code developed to perform all these functions; I just wanted to give you an example of what needs to be done. I'm thinking that what I'm trying to create is a library that one can download and the program will recognize, but I'm not really sure where to go. What I'm really trying to do is figure out what kind of file I can make that will contain all this code and the GUI information that the program will recognize. Any ideas? With thanks, John
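    For what it's worth, the usual .NET shape for this kind of add-on is a class library (DLL) that implements an interface defined by the core program and is discovered at start-up with reflection. A rough sketch, written here in C# but equally doable in VB.NET, with all names invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;
        using System.Windows.Forms;

        // Contract compiled into a shared "SDK" assembly that both the core
        // program and every company module reference.
        public interface ICompanyModule
        {
            string CompanyName { get; }
            Control CreateGenealogyGroupBox();
            Control CreateContactBusinessPanel();
            Control CreateContactSearchPanel();
            Control CreateBusinessOverviewPanel();
        }

        public static class ModuleLoader
        {
            // Scans a folder for module DLLs and instantiates anything that
            // implements ICompanyModule.
            public static ICompanyModule[] LoadModules(string folder)
            {
                var modules = new List<ICompanyModule>();
                foreach (string dll in Directory.GetFiles(folder, "*.dll"))
                {
                    Assembly asm = Assembly.LoadFrom(dll);
                    foreach (Type t in asm.GetTypes())
                    {
                        if (!t.IsAbstract && typeof(ICompanyModule).IsAssignableFrom(t))
                            modules.Add((ICompanyModule)Activator.CreateInstance(t));
                    }
                }
                return modules.ToArray();
            }
        }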

    Read the article

  • Help me plan a larger Qt project

    - by Pirate for Profit
    I'm trying to create an automated task management system for our company, because they pay me to waste my time. New users will create a "profile", which will store all the working data (I guess serialize everything into XML, right?). A "profile" will contain many different tasks. Tasks are basically just standard computer janitor crap such as moving around files, reading/writing to databases, pinging servers, etc. So as you can see, a task has many different jobs it does, and tasks should run indefinitely as long as the user somehow generates "jobs" for them. There should also be a way to enable/disable (start/pause) tasks.

    They say create the UI first, so... I figure the best way to represent this is with a list-view widget that lists all the tasks in the profile. Enabled tasks will be bold, disabled tasks won't be. When you double-click a task, a tab in the main view opens with all the settings, output and errors. You can right-click a task in the list-view to enable/disable it, etc. So each task will be a closable tab, but when you close it, it just hides.

    My question is: should I extend QAction and QTabWidget so I can easily drop tasks in and out of my list-view and tab bar? I'm thinking of some way to make this plugin-based, since a lot of the plugins may share similar settings (like some of the same options, but different info is input). Also, what's the best way to set up threading for this application?

    Read the article

  • Magento: The best way to hook into the checkout process

    - by dan.codes
    I am integrating with a third-party order management system and I have to make calls to it throughout the checkout process. The problem is, I don't think there are many events available, because of how the onepage checkout is all done in JavaScript/Ajax calls. There are a few, like after saving the shipping method, but none of the dynamic events seem to fit either. Basically, I need to know as soon as the user gets access to the shipping method tab, so I can pass the billing/shipping address over, and then after the shipping method is saved, to pass that over too; obviously there is an event for that. I know there are events for when you submit an order, so that should be good. I guess I only need to know when the billing/shipping address is saved. I was using controller_action_layout_render_before_checkout_onepage_progress, but the progress step gets called way too late. It just doesn't seem like there are a lot of hooks through the onepage checkout. If anyone can give me some examples of what they have done, that would be great!

    Read the article

  • Will this web service accept both raw XML and an object?

    - by ChadNC
    We have a web service that provides auto insurance quotes, and a company that provides an insurance agency management system would like to use the web service for their client. However, they want to pass the web service raw XML instead of using the WSDL to create a port and the object the service expects and then calling the web method. The web service has performed flawlessly when creating an object like so:

        com.insurance.quotesvc.AgencyQuote service = new com.insurance.quotesvc.AgencyQuote();
        com.insurance.quotesvc.QuotePortType port = service.getQuotePortType();
        com.insurance.quotesvc.schemas.request.ACORD parameter = null;

    then initializing the request object with the other objects that make up the request:

        parameter = factory.createACORD();
        parameter.setSignonRq(signOn);
        parameter.setInsurancesSvcRq(svcRq);

    and sending the request to the web service:

        com.insurance.quotesvc.schemas.response.ACORD result = null;
        result = port.requestQuote(parameter);

    By doing that I am able to easily marshal the request and the result into an XML file and do with them as I wish. So if a client was to send the web service raw XML inside of a SOAP envelope via an HTTP POST, would the web service be able to handle the XML without any changes being made to the web service, or would changes need to be made in order for it to handle a request of that type? The web service is JAX-WS and we currently have both Java and C# clients consuming it using the method described above, but now there is another client who wants to send raw XML inside of a SOAP envelope instead of creating the objects. I feel pretty sure that they will be making the call to the web service using VB. I'm sure I'm missing something obvious, but it is eluding me at the moment, and any help is greatly appreciated.
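    For reference, a raw-XML client of the kind described would typically do nothing more than an HTTP POST of a hand-built SOAP envelope. A rough C# sketch of such a caller is below; the endpoint URL, namespaces, element bodies and SOAPAction value are placeholders, not the real service's values, and would have to match whatever the WSDL actually defines.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class RawSoapClient
        {
            static void Main()
            {
                // Envelope shown with placeholder namespaces/content; the real ones
                // come from the service's WSDL (the ACORD request schema in this case).
                const string soap =
                    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "  <soapenv:Body>" +
                    "    <ACORD xmlns=\"http://example.com/quotesvc/request\">" +
                    "      <SignonRq>...</SignonRq>" +
                    "      <InsuranceSvcRq>...</InsuranceSvcRq>" +
                    "    </ACORD>" +
                    "  </soapenv:Body>" +
                    "</soapenv:Envelope>";

                var request = (HttpWebRequest)WebRequest.Create("http://example.com/quotesvc/AgencyQuote");
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                request.Headers.Add("SOAPAction", "\"requestQuote\"");   // value depends on the WSDL binding

                byte[] body = Encoding.UTF8.GetBytes(soap);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());   // raw SOAP response XML
            }
        }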

    Read the article

  • How to Set up Virtual Static Subdomain

    - by Chip D
    Given the current rewrite rules at http://www.example.com/:

        Options +FollowSymlinks +Includes
        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^/?(.*) http://www.example.com/$1 [L,R,NE]
        # Remove all "index.html"s.
        RewriteCond %{THE_REQUEST} \ /(.+/)?index\.html(\?.*)?\ [NC]
        RewriteRule ^(.+/)?index\.html$ /%1 [R=301,L]
        # Remove all "index.php"s.
        RewriteCond %{THE_REQUEST} \ /(.+/)?index\.php(\?.*)?\ [NC]
        RewriteRule ^(.+/)?index\.php$ /%1 [R=301,L]

    I'm attempting to serve some site assets (.png|.ico|.jpg|.gif|.css|.js) from a static subdomain like http://static.example.com, which my Apache 1.3 shared host (GoDaddy) has mapped to a subdirectory for file management (http://www.example.com/static/). Currently these assets live in same-site subdirectories such as images/, js/, css/, etc.

      (1) Something, maybe the "+www" rewrite rule, is not letting me access the subdomain. Is this likely caused by my rewrite rules above, or do I need to set up DNS changes with the host to enable access to the subdirectory, in addition to changing the rewrite rules?
      (2) Do I have to move those assets to that subdirectory and change all references sitewide? Would that perform the fastest?
      (3) Or can .htaccess make this much easier?
      (4) Should my first rewrite rule above use [L,R=301,NE] instead of [L,R,NE]?

    Read the article

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application; I'm just trying to get down to the core concept for the purpose of the question.

    The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would simply be (in C#):

        Dictionary<String, DateTime>

    though in the interest of lowering memory footprint and potentially increasing performance, I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT, some additional information that might change proposed implementations:

      - 99% of the strings will not be duplicates.
      - Almost all of the duplicates will arrive back to back, or nearly sequentially.
      - In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
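    As a point of comparison, here is the naive dictionary idea fleshed out, storing a 64-bit hash instead of the string itself. The choice of hash and the lock-based synchronization are assumptions for illustration; a truncated cryptographic hash cannot strictly guarantee zero false positives, and old entries would still need periodic purging to bound memory.

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        class DuplicateFilter
        {
            private readonly Dictionary<ulong, DateTime> _seen = new Dictionary<ulong, DateTime>();
            private readonly TimeSpan _expiry;
            private readonly object _sync = new object();

            public DuplicateFilter(TimeSpan expiry) { _expiry = expiry; }

            // Returns true if the string should be processed (not seen within the window).
            public bool ShouldProcess(string value)
            {
                ulong key = Hash64(value);
                DateTime now = DateTime.UtcNow;
                lock (_sync)
                {
                    DateTime lastSeen;
                    if (_seen.TryGetValue(key, out lastSeen) && now - lastSeen < _expiry)
                        return false;            // duplicate within the expiry window
                    _seen[key] = now;            // record (or refresh) the timestamp
                    return true;
                }
            }

            // First 8 bytes of SHA-256; collisions are unlikely but possible, so this is
            // only acceptable if an occasional false "duplicate" can be tolerated.
            private static ulong Hash64(string value)
            {
                using (var sha = SHA256.Create())
                {
                    byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(value));
                    return BitConverter.ToUInt64(digest, 0);
                }
            }
        }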

    Read the article

  • Bind an in-code DataTemplate in WPF

    - by Mike Bynum
    I have a WPF application which is using MVVM. I know there are ways of doing this in XAML, but I am working on a plugin architecture and came up with a solution where a plugin exposes its view model and its DataTemplate to my plugin host's view model. I want to leave the lifetime management of the plugin view up to WPF. I have tried having the plugins expose a UserControl, but ran into issues when WPF decided to dispose of my UserControl, so I could not reattach it without weird hacky workarounds. I am having issues getting some sort of binding working where I can bind a control to the data and its template to my DataTemplate. I have a ViewModel which looks something like:

        public class MyViewModel
        {
            public DataTemplate SelectedTemplate { get; set; }
            public object SelectedViewModel { get; set; }
        }

    The selected template and view model are determined somewhere else in the code, but that is irrelevant to my question. My question is how I can bind to a DataTemplate so that I know how to display the data held in SelectedViewModel. The DataTemplate is a DataTemplate created in code which represents:

        <DataTemplate DataType="{x:Type vm:MyViewModel}">
            <v:MyUserControl />
        </DataTemplate>

    I have tried:

        <UserControl Template="{Binding Path=SelectedTemplate}" Content="{Binding Path=SelectedViewModel}" />

    But UserControl expects a ControlTemplate and not a DataTemplate.
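    A minimal sketch of one way this pairing is often hosted, assuming the DataContext is the view model shown above: a ContentControl takes a DataTemplate for its ContentTemplate, unlike UserControl.Template, and since the question is about in-code templates, the bindings here are set in code as well.

        using System.Windows.Controls;
        using System.Windows.Data;

        static class PluginViewFactory
        {
            // Builds a ContentControl whose Content is the plugin view model and whose
            // ContentTemplate is the template the plugin supplied, both via binding.
            public static ContentControl CreateHost()
            {
                var host = new ContentControl();
                host.SetBinding(ContentControl.ContentProperty, new Binding("SelectedViewModel"));
                host.SetBinding(ContentControl.ContentTemplateProperty, new Binding("SelectedTemplate"));
                return host;
            }
        }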

    Read the article

  • How to control access to third-party HTML pages

    - by Wylie
    Hello, we have a Learning Management System (LMS) that runs on its own server (IIS/Server 2003). Students must log in with Forms authentication to gain access to the content. We want to offer access to third-party Flash and audio that is embedded in HTML pages hosted on the third-party server (also IIS/Server 2003). Currently we use a frame in a pop-up window that is populated via a simple URL to the third-party HTML pages. How can the third party control access to their content, so that only students who launch the pop-up windows from our site can access it? Since the content is mostly video and Flash, we would prefer not to stream all of it through our server to the student. We have a programming staff, so we could maybe:

      - use either POST or GET for our HTTP request to the third-party server
      - use SSL
      - programmatically assign a global NT user account to all of our users and then do some kind of Active Directory login from the LMS server to the third-party server
      - host the third-party content at Amazon S3? Would this allow for secure access/download?

    These are just ideas; we really have no idea. Any suggestions would be greatly appreciated. TIA, Wylie
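    Not an answer as such, but one pattern that fits the "only links launched from our site" requirement is expiring signed URLs, which is essentially what Amazon S3's query-string authentication does. A rough sketch follows; the secret, parameter names and hashing choices are all assumptions, and in practice the LMS side and the content-host side would each hold the shared secret in configuration.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class SignedLinks
        {
            // Shared secret known only to the LMS and the content host (placeholder value).
            private const string Secret = "change-me";

            // LMS side: append the student id, an expiry timestamp and an HMAC to the content URL.
            public static string Sign(string baseUrl, string studentId, TimeSpan lifetime)
            {
                long expires = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
                string token = Hmac(studentId + "|" + expires);
                return baseUrl + "?sid=" + Uri.EscapeDataString(studentId) +
                       "&exp=" + expires + "&sig=" + token;
            }

            // Content-host side: recompute the HMAC and check the expiry before serving the page.
            public static bool Validate(string studentId, long expires, string signature)
            {
                if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() > expires) return false;
                return Hmac(studentId + "|" + expires) == signature;
            }

            private static string Hmac(string payload)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(Secret)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    // URL-safe Base64 so the token survives the query string.
                    return Convert.ToBase64String(hash).TrimEnd('=').Replace('+', '-').Replace('/', '_');
                }
            }
        }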

    Read the article

  • Entity Framework does not map 2 columns from a SqlQuery calling a stored procedure

    - by user1783530
    I'm using Code First and am trying to call a stored procedure and have it map to one of my classes. I created a stored procedure, BOMComponentChild, that returns details of a component along with information about its hierarchy in PartsPath and MyPath. I have a class for the output of this stored procedure. I'm having an issue where everything except the two columns PartsPath and MyPath is mapped correctly; these two properties end up as Nothing. I searched around, and from my understanding this kind of mapping bypasses any Entity Framework name mapping and matches column names directly to property names. The names are the same, and I'm not sure why it is only these two columns. The last part of the stored procedure is:

        SELECT t.ParentID
              ,t.ComponentID
              ,c.PartNumber
              ,t.PartsPath
              ,t.MyPath
              ,t.Layer
              ,c.[Description]
              ,loc.LocationID
              ,loc.LocationName
              ,CASE WHEN sup.SupplierID IS NULL THEN 1 ELSE sup.SupplierID END AS SupplierID
              ,CASE WHEN sup.SupplierName IS NULL THEN 'Scinomix' ELSE sup.SupplierName END AS SupplierName
              ,c.Active
              ,c.QA
              ,c.IsAssembly
              ,c.IsPurchasable
              ,c.IsMachined
              ,t.QtyRequired
              ,t.TotalQty
        FROM BuildProducts t
        INNER JOIN [dbo].[BOMComponent] c ON c.ComponentID = t.ComponentID
        LEFT JOIN [dbo].[BOMSupplier] bsup ON bsup.ComponentID = t.ComponentID AND bsup.IsDefault = 1
        LEFT JOIN [dbo].[LookupSupplier] sup ON sup.SupplierID = bsup.SupplierID
        LEFT JOIN [dbo].[LookupLocation] loc ON loc.LocationID = c.LocationID
        WHERE (@IsAssembly IS NULL OR IsAssembly = @IsAssembly)
        ORDER BY t.MyPath

    and the class it maps to is:

        Public Class BOMComponentChild
            Public Property ParentID As Nullable(Of Integer)
            Public Property ComponentID As Integer
            Public Property PartNumber As String
            Public Property MyPath As String
            Public Property PartsPath As String
            Public Property Layer As Integer
            Public Property Description As String
            Public Property LocationID As Integer
            Public Property LocationName As String
            Public Property SupplierID As Integer
            Public Property SupplierName As String
            Public Property Active As Boolean
            Public Property QA As Boolean
            Public Property IsAssembly As Boolean
            Public Property IsPurchasable As Boolean
            Public Property IsMachined As Boolean
            Public Property QtyRequired As Integer
            Public Property TotalQty As Integer
            Public Property Children As IDictionary(Of String, BOMComponentChild) = New Dictionary(Of String, BOMComponentChild)
        End Class

    I am trying to call it like this:

        Me.Database.SqlQuery(Of BOMComponentChild)("EXEC [BOMComponentChild] @ComponentID, @PathPrefix, @IsAssembly", params).ToList()

    When I run the stored procedure in Management Studio, the columns are correct and not null. I just can't figure out why these two won't map, as they are the important information in the stored procedure. The types of PartsPath and MyPath are varchar(50).

    Read the article

  • VB6 ActiveX exe - what is the proper registration sequence?

    - by Timbuck
    I have recently updated a Visual Basic 6 application that is an ActiveX EXE, running on Windows XP. I have a couple of testers for this application who have received a copy of the EXE and are attempting to run it. However, they are getting the error message "Unexpected error; quitting" when they try. A key difference between their testing and mine is that on the machines I tested on, I have admin rights and was able to register the application using the appname.exe /regserver command line. The details at MS Support about file registration are unclear:

        Visual Basic ActiveX EXE files register themselves the first time you run the EXE. However, you cannot use the EXE as a COM server until it is registered.

    So does this mean that after the first time the users run the EXE the application should be correctly registered, and the error I am seeing is a sign of something other than an incorrectly registered application? Or does it mean that the application will not work properly until the file is explicitly registered using the appname.exe /regserver command line? N.B. during a production distribution, the software would be sent out to client PCs using Systems Management Server, which isn't an option for this testing.

    Read the article

  • Python-based password tracker (or dictionary)

    - by Arrieta
    Hello: Where we work, we need to remember about 10 long passwords which need to change every so often. I would like to create a utility which can save these passwords in an encrypted file so that we can keep track of them. I can think of some sort of dictionary, passwd = {'host1': 'pass1', 'host2': 'pass2'}, etc., but I don't know what to do about encryption (absolutely zero experience in the topic). So my question is really two questions:

      1. Is there a Linux-based utility which lets you do that?
      2. If you were to program it in Python, how would you go about it?

    A perk of approach two would be for the software to update the SSH public keys after the password has been changed (you know the pain of updating ~15 tokens once you change your password). As you might expect, I have zero control over the actual network configuration and the management of scp keys. I can only hope to provide a simple utility to me and my very few coworkers so that, if we need to, we can retrieve a password on demand. Cheers.

    Read the article

  • How to add logic to views? (Ruby on Rails)

    - by Gotjosh
    Right now I'm building a project management app in Rails. Here is some background info: I have 2 models, one is User and the other one is Client. Clients and Users have a one-to-one relationship (Client has_one User and User belongs_to Client, which means the foreign key is in the users table). So what I'm trying to do is: once you add a client, you can add credentials (add a user) for that client. In order to do so, all the clients are displayed with a link next to each client's name, meaning that you can create credentials for that client. What I can't figure out how to do is: if credentials already exist in the database (meaning there's a record in the users table with that client's id), then don't display that link. Here's what I thought would work:

        <% for client in @client %>
          <h5>
            <h4><%= client.id %></h4>
            <a href="/clients/<%= client.id %>"><%= client.name %></a>
            <% for user in @user %>
              <% if user.client_id = client.id %>
                <a href="/clients/<%= client.id %>/user/new">Credentials</a>
              <% end %>
            <% end %>
          </h5>
        <% end %>

    And here's the controller:

        def index
          @client = Client.find_all_by_admin(0)
          @user = User.find(:all)
        end

    But instead it just prints the link once for every record in the users table. Any help?

    Read the article

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of:

      1) pricing
      2) latency resulting from:
         - slow CPUs, instance creation, JIT compiles, etc.
         - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
         - communication between cloud and end user
      3) ease of deployment

    The usage scenario I am expecting is:

      - A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
      - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
      - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds.
      - A background save of the response to a DB should occur (not time critical).
      - There can be up to 30000 simultaneous users, i.e. on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (for speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article

  • Passing a C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++, and parsing the input text files is a bottleneck of it. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run time. The C++ code has some pre-defined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all. What I need is for both the algorithm and the function evaluation to remain speedy, so it is advantageous to keep them both as compiled code (and preferably the math functions as C++ function objects). However, I have thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib) and the GUI in Python. I know that most of the time exposing C++ classes can be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call

        f = WrappedCPPGaussianFunctionClass(sigma=0.5)
        WrappedCPPAlgorithm(f)

    in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar... (don't ask me about memory management in this case, though :S) The point is that no callback should be made to Python code in the algorithm. Later I would like to extend this example to also do some simple expression parsing on the Python side, such as sums or products of functions, and return some compound, parse-tree-like C++ object, but let's stay with the basics for now. Sorry for the long post and thanks for the suggestions in advance.

    Read the article

  • Quickly accessing files in a 'project'

    - by bbbscarter
    Hi all. I'm looking for a way to quickly open files in my project's source tree. What I've been doing so far is adding files to the file-name-cache like so:

        (file-cache-add-directory-recursively
          (concat project-root "some/sub/folder") ".*\\.\\(py\\)$")

    after which I can use anything-for-files to access any file in the source tree with about 4 keystrokes. Unfortunately, this solution started falling over today. I've added another folder to the cache and Emacs has started running out of memory. What's weird is that this folder contains less than 25% of the files I'm adding, and yet Emacs memory use goes up from 20 MB to 400 MB on adding just this folder. The total number of files is around 2000, so this memory use seems very high. Presumably I'm abusing the file cache. Anyway, what do other people do for this? I like this solution for its simplicity and speed; I've looked at some of the many, many project management packages for Emacs and none of them really grabbed me... Thanks in advance! Simon

    Read the article

  • Expose a web service directly to web clients, or keep a thin server-side script layer in between?

    - by max
    Hi, I'm developing a REST web service (Java, Jersey). The people I'm doing this for want to access the web service directly via JavaScript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct. My natural approach would have been to have the web service do the real logic and database access, but also to have a (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer, which in turn would talk to the web service. (The web service would be pretty local to the Apache/PHP server and implicitly trust calls from the script layer. The script layer would take care of session management.) (Btw, I am not talking about just hiding the web service behind an Apache which simply redirects calls.) But as I find myself at a loss for words/arguments to explain my instinct, I wonder whether my instinct is right; note that while I have been developing all kinds of software in all kinds of languages and frameworks for like 17 years, this is the first time I have developed a web service. So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P Many thanks, Max

    PS: I might add a few bits of information about the planned usage of the whole application:

      - It will be accessed by different kinds of users, partly general public, partly privileged; thus, all major OS/browser combinations can be expected as clients. However, writing the client is not my responsibility.
      - It will potentially have very high load/traffic.
      - The logic of the web service will later be massively expanded for another product which is basically a superset of the functionality of the current project.
      - There is a significant likelihood that at some point an API should be exposed which can be used by 3rd-party developers (obviously, with some restrictions).
      - At some point, the public view of the product should become accessible via smartphones too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods).

    Read the article

  • Accessing a WordPress database from an iPhone app

    - by Code
    Hi guys, I've been asked to create an app that will get data back from a database where the CMS will be WordPress. I've never used a CMS, so I'm trying to get an overview picture in my head of how it could all work, what each of the components would be, and what a CMS actually brings to the party. Creating the app itself is pretty clear; I've done a few already. I've made a database before, so that shouldn't cause a problem. But what is going to be in the middle, between the app and the database?

    Part A: I'm guessing iPhone apps typically call some PHP file that's hosted on the server? The PHP would then make a call to the database and return the data somehow, maybe as XML. But this is really basic and wouldn't require a CMS, just a database and a PHP file, or am I wrong?

    Part B: If I wanted to run a check on the database every minute to see if any of the data in the database was no longer valid, and remove it if needed, that would require some kind of program running on the server. So that program would be WordPress, since it is managing the content; so a content management system is actually needed, and is for these kinds of tasks. Am I understanding the role of a CMS? Many thanks, -Code

    Read the article

  • Using git (or some other VCS) at your company

    - by supercheetah
    Some friends of mine and I were talking recently about version control, and how they were using VSS at their jobs and were probably going to be moving off of it soon. One of them said that his company will likely be going with Team Foundation Server. Eventually, the conversation got around to some of the open source VCSes out there, including git and SVN. None of us really knew of any companies that use either of these internally, although we imagined that a number of them do so for SVN; we weren't too sure about git. I brought up Google and Android using it, but my friend figured that's only for the public-facing source code, and that they may use something different for internal projects. Apparently it's more than just SCM that makes TFS so intriguing:

      - Microsoft sales people and support (although my friend did point out some things to his managers that he thought might be misleading on MS' part)
      - Integration of things beyond SCM, including project management (I'm just finding out that there are projects geared towards the same things for git)
      - Again, it's Microsoft, and the transition from VSS to TFS seems logical (or does it?)

    I'm not much of a fan of SVN, so I didn't really bring it up much, but I am curious about whether or not git is used at your company for internal projects. Have you thought about it and decided against it? Any reason why?

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design a document management architecture in a large word processor, spreadsheet, vector graphics or publishing package, or office-productivity (non-database) application, with support for as many of the following as possible:

      - Any open source project would be ideal, and the language of implementation is unimportant since I am looking for design examples; however, an object-oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source.
      - Use of "interface" can loosely be applied to XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual + abstract) base classes for OOP languages that lack the ability to declare an interface distinct from a class.
      - I am mostly looking for a robust, thorough and flexible implementation of a document (IDocument), various document views (IDocumentView), and whatever operations make sense in that case.
      - I am particularly interested in cases where the product in question is a real-world product; for example, if anybody familiar with OpenOffice can tell me whether the code contains a good sample design. I am looking for design documentation that outlines the design of the interfaces for such an application. So, for example, if the OpenOffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design with millions of users, rather than a textbook example, which is minimal and contrived.
      - I know that the Mozilla platform uses XPCOM, and its design is heavily "interface"-oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design rather than a web browser.
      - I am particularly interested in the interfaces used to access data and metadata, such as markup (attributes like bold, italics and font size), and the ability to search and look up named entities within a document.
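    Not an exemplar from a shipping product, but as a strawman of the kind of interface split being asked about, a minimal C# sketch (all names and members are invented for illustration, not taken from any real application) might look like:

        using System;
        using System.Collections.Generic;

        // A document exposes its content as ranges and notifies views of changes.
        public interface IDocument
        {
            string Title { get; set; }
            bool IsDirty { get; }

            ITextRange FullRange { get; }
            IEnumerable<ITextRange> Find(string text);

            void Load(System.IO.Stream source);
            void Save(System.IO.Stream destination);

            event EventHandler Changed;          // views subscribe to stay in sync
        }

        // A contiguous span of content, carrying its formatting attributes.
        public interface ITextRange
        {
            string Text { get; set; }
            bool Bold { get; set; }
            bool Italic { get; set; }
            double FontSize { get; set; }
        }

        // A view renders a document and tracks a selection; many views per document.
        public interface IDocumentView
        {
            IDocument Document { get; }
            ITextRange Selection { get; }
            void ScrollTo(ITextRange range);
            void Refresh();
        }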

    Read the article

  • What is an appropriate way to separate lifecycle events in the logging system?

    - by Hanno Fietz
    I have an application with many different parts. It runs on OSGi, so there are the bundle lifecycles, and there are a number of message processors and plugin components that all can die, can be started and stopped, can have their setup changed, etc. I want a way to get a good picture of the current system status: what components are up, which have problems, how long they have been running for, and so on. I think that logging, especially in combination with custom appenders (I'm using log4j), is a good part of the solution and helps ad-hoc analysis as well as live monitoring. Normally I would classify lifecycle events as INFO level, but what I really want is to have them separate from whatever else is going on at INFO. I could create my own level, LIFECYCLE. The lifecycle events happen in various different areas and on various levels in the application hierarchy; they also happen in the same areas as other events that I want to separate them from. I could introduce some common lifecycle management and use that to distinguish the events from others. For instance, all components that have a lifecycle could implement a particular interface and I could log by its name. Are there good examples of how this is done elsewhere? What are the considerations?
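    For illustration only, one way to keep lifecycle events separate without inventing a new level is to give them their own logger hierarchy and route it to a dedicated appender in configuration. A sketch in C# with log4net (the log4j equivalent uses Logger.getLogger the same way); the component and logger names are assumptions:

        using log4net;

        // Invented component for illustration.
        public class MessageProcessor
        {
            // Ordinary diagnostics go to the class logger...
            private static readonly ILog Log = LogManager.GetLogger(typeof(MessageProcessor));

            // ...lifecycle events go to a parallel "Lifecycle.*" hierarchy, which the
            // configuration can route to its own appender (file, dashboard, monitoring feed).
            private static readonly ILog Lifecycle = LogManager.GetLogger("Lifecycle.MessageProcessor");

            public void Start()
            {
                Lifecycle.Info("started");
                Log.Debug("start-up housekeeping done");
            }

            public void Stop()
            {
                Lifecycle.Info("stopped");
            }
        }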

    Read the article

  • Login fails when recreating database with Code First

    - by Mun
    I'm using Entity Framework Code First in an ASP.NET application to create my database from the model, and the login seems to fail when the database needs to be recreated after the model changes. In Global.asax, I've got the following:

        protected void Application_Start()
        {
            Database.SetInitializer(new DropCreateDatabaseIfModelChanges<EntriesContext>());
            // ...
        }

    In my controller, I've got the following:

        public ActionResult Index()
        {
            // This is just to force the database to be created
            var context = new EntriesContext();
            var all = (from e in context.Entries select e).ToList();
        }

    When the database doesn't exist, it is created with no problems. However, when I make a change to the model, rebuild and refresh, I get the following error:

        Login failed for user 'sa'.

    My connection string looks like this:

        <add name="EntriesContext"
             connectionString="Server=(LOCAL);Database=MyDB;User Id=sa;Password=password"
             providerName="System.Data.SqlClient" />

    The login definitely works, as I can connect to the server and the database from Management Studio using these credentials. If I delete the database manually, everything works correctly and the database is recreated as expected, with the schema reflecting the changes made to the model. It seems like either the password or access to the database is being lost. Is there something else I need to do to get this working?

    Read the article
