Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • a package for kruskal-wallis that shows pairwise comparison details

    - by dalloliogm
    The standard stats::kruskal.test function allows you to run a Kruskal-Wallis test on a dataset:

        > data(diamonds)
        > kruskal.test(price~carat, data=diamonds)

                Kruskal-Wallis rank sum test

        data:  price by carat by color
        Kruskal-Wallis chi-squared = 50570.15, df = 272, p-value < 2.2e-16

    This is fine: it gives me the probability that all the groups in the data have the same mean. However, I would like to have the details for each pairwise comparison, e.g. whether diamonds of colors D and E have the same mean price, as some other software (SPSS) does when you ask for a Kruskal test. I have found kruskalmc from the package pgirmess, which allows me to do what I want:

        > kruskalmc(diamonds$price, diamonds$color)
        Multiple comparison test after Kruskal-Wallis
        p.value: 0.05
        Comparisons
                  obs.dif critical.dif difference
        D-E      571.7459     747.4962      FALSE
        D-F     2237.4309     751.5684       TRUE
        D-G     2643.1778     726.9854       TRUE
        D-H     4539.4392     774.4809       TRUE
        D-I     6002.6286     862.0150       TRUE
        D-J     8077.2871    1061.7451       TRUE
        E-F     2809.1767     680.4144       TRUE
        E-G     3214.9237     653.1587       TRUE
        E-H     5111.1851     705.6410       TRUE
        E-I     6574.3744     800.7362       TRUE
        E-J     8649.0330    1012.6260       TRUE
        F-G      405.7470     657.8152      FALSE
        F-H     2302.0083     709.9533       TRUE
        F-I     3765.1977     804.5390       TRUE
        F-J     5839.8562    1015.6357       TRUE
        G-H     1896.2614     683.8760       TRUE
        G-I     3359.4507     781.6237       TRUE
        G-J     5434.1093     997.5813       TRUE
        H-I     1463.1894     825.9834       TRUE
        H-J     3537.8479    1032.7058       TRUE
        I-J     2074.6585    1099.8776       TRUE

    However, this package only allows for one categorical variable (e.g. I can't study the prices clustered by color and by carat, as I can with kruskal.test), and I don't know anything about the pgirmess package, whether it is maintained or not, or whether it is tested.
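    For readers working in Python rather than R, a rough sketch of the same post-hoc idea. Note this is not the kruskalmc procedure itself (which, as the output above shows, compares observed against critical rank differences); it is the related approach of pairwise Mann-Whitney tests with a Bonferroni correction, and the column names and data frame are placeholders:

        # Sketch: overall Kruskal-Wallis test plus pairwise post-hoc comparisons.
        # Assumes a pandas DataFrame `df` with a numeric "price" column and a
        # categorical "color" column; these names are illustrative only.
        from itertools import combinations
        from scipy import stats

        def kruskal_with_pairwise(df, value_col, group_col, alpha=0.05):
            groups = {g: sub[value_col].values for g, sub in df.groupby(group_col)}
            h, p = stats.kruskal(*groups.values())        # overall test
            pairs = list(combinations(sorted(groups), 2))
            corrected_alpha = alpha / len(pairs)          # Bonferroni correction
            results = []
            for a, b in pairs:
                _, p_pair = stats.mannwhitneyu(groups[a], groups[b],
                                               alternative="two-sided")
                results.append((f"{a}-{b}", p_pair, p_pair < corrected_alpha))
            return (h, p), results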

    Read the article

  • Using Zend Framework and Doctrine with independent modular structures

    - by stefax
    I've seen a lot of articles about integrating ZF and Doctrine. There is also a proposal for ZF here, but they always have two possible structures: either they put all models into one top-level model directory, or they put them into a module-related model directory.

        application
        |-- Bootstrap.php
        |-- configs
        |-- controllers
        |-- models            - EITHER HERE
        |-- modules
        |   `-- examplemodule
        |       |-- controllers
        |       |-- models    - OR HERE
        |       `-- views
        `-- views

    For our projects I see problems with either of the two options:

    1. One directory (application/models): in a complex system there will be hundreds of files after a short time, especially when you have the table classes too (e.g. User.php and UserTable.php).
    2. Module-based model directories (application/modules/examplemodule/models): in many cases we use models in multiple modules at the same time, so the "User" model is required e.g. in the modules "game", "administration", ...

    Is there a way to use some kind of subdirectories under the top-level "models" directory to get some grouping? It should be completely independent of the module structure.

        application
        |-- Bootstrap.php
        ...
        |-- models
        |   |-- user
        |   |   |-- User.php
        |   |   |-- Friend.php
        |   |   `-- other user related models
        |   `-- game
        |       |-- Game.php
        |       |-- Score.php
        |       `-- ...
        ...

    Any solution should support autoloading and class generation from yaml files. Any ideas, links or solutions? Thanks!

    Read the article

  • How to set up a Zend_Application with an application.ini and a user.ini

    - by Peter Smit
    I am using Zend_Application and it does not feel right that I am mixing both application and user configuration in my application.ini. What I mean is the following: my application needs some library classes in the namespace MyApp_, so in application.ini I put autoloaderNamespaces[] = "MyApp_". This is pure application configuration; no-one except a programmer would change it. On the other hand, I also put the database configuration there, something that a sysadmin would change.

    My idea is to split the options between an application.ini and a user.ini, where the options in user.ini take preference (so I can define standard values in application.ini). Is this a good idea? How can I best implement it? The ideas I have are:

    - Extending Zend_Application to take multiple config files
    - Making an init function in my Bootstrap that loads the user.ini
    - Parsing the config files in my index.php and passing them to Zend_Application (sounds ugly)

    What shall I do? I would like to have the 'cleanest' solution, one which is prepared for the future (newer ZF versions, and other developers working on the same app).
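    The layered-override idea itself is not ZF-specific. As a minimal illustration of the pattern (in Python rather than PHP, with placeholder file names and keys), later files simply win over earlier ones:

        # Sketch of layered configuration: defaults in application.ini,
        # overrides in user.ini; later files win. File names are placeholders.
        from configparser import ConfigParser

        # For the sake of a self-contained example, write the two layers out first.
        with open("application.ini", "w") as fh:
            fh.write("[database]\nhost = localhost\nname = appdb\n")
        with open("user.ini", "w") as fh:
            fh.write("[database]\nhost = db.internal.example\n")

        config = ConfigParser()
        # read() takes a list; files later in the list override earlier ones,
        # and files that do not exist are silently skipped.
        config.read(["application.ini", "user.ini"])

        print(config.get("database", "host"))   # db.internal.example (user.ini wins)
        print(config.get("database", "name"))   # appdb (falls back to application.ini)

    If memory serves, Zend_Config objects can be merged in a similar spirit, which is roughly what the first two options in the list above would end up wrapping.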

    Read the article

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data. It is currently working, but I want to re-design the architecture to be more robust. My main goals are:

    - sequentially present the time-series data to the expression trees.
    - allow expression trees to access previous data rows when needed.
    - optimize performance of the data access while evaluating the expression trees.
    - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

    1. Evaluate the expression tree by passing a data row into the root node and let each child node use the same data row.
    2. Evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
    3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows).

    The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one.

    The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows.

    Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: all of my code is written in C#.
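    As a rough, language-neutral sketch of the hybrid option (written in Python here rather than C#; the class and field names are invented for illustration), the evaluation context carries both the current row and a read-only window over earlier rows:

        # Sketch of the "hybrid" evaluation: trees receive the current row plus a
        # read-only view of the history, so no shared mutable state is queried.
        from dataclasses import dataclass
        from typing import Sequence

        @dataclass(frozen=True)
        class EvalContext:
            rows: Sequence[dict]   # immutable history, oldest first
            index: int             # index of the row being evaluated

            @property
            def current(self):
                return self.rows[self.index]

        class PriceNode:
            def evaluate(self, ctx):
                return ctx.current["close"]

        class LagNode:
            """Evaluates its child as of n rows earlier (clamped to the first row)."""
            def __init__(self, child, n):
                self.child, self.n = child, n
            def evaluate(self, ctx):
                earlier = EvalContext(ctx.rows, max(ctx.index - self.n, 0))
                return self.child.evaluate(earlier)

        def run(tree, rows):
            # Sequentially present each row to the tree.
            return [tree.evaluate(EvalContext(rows, i)) for i in range(len(rows))]

        rows = [{"close": p} for p in (10.0, 10.5, 10.2, 10.8)]
        tree = LagNode(PriceNode(), 1)          # "close price one row ago"
        print(run(tree, rows))                  # [10.0, 10.0, 10.5, 10.2]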

    Read the article

  • What framework would allow for the largest coverage of freelance developers in the media/digital mar

    - by optician
    This question is not about which is the best; it is about which makes the most business sense to use as a company's platform of choice for ongoing freelance development. I'm currently trying to decide what framework to move my company to for web application work. The options are:

    - ASP.NET MVC
    - Django
    - CakePHP/Symfony etc.
    - Struts
    - Perl on Rails

    Please feel free to add more to the discussion. I currently work with ASP.NET MVC in my spare time, and find it incredibly enjoyable to work with. It is my first experience with an MVC framework for the web, so I can't speak for the others. The reason for not pushing this at the company is that I feel there are not many developers in the media/marketing world who would work with it, so it may be hard to extend the team, or at least cost more.

    I would like to move into learning and pushing Django, partly to learn Python, partly to feel a bit cooler (all my geeky friends use Java/Python/C++). Microsoft is the dark side to most companies I work with (marketing/media focused). But again I'm worried about developers in this sector.

    PHP seems like the natural choice, but I'm scared by the sheer number of possible frameworks, and also that the quality of developer may be lower. I know there are great PHP developers out there, but how many of them know multiple frameworks? Are they similar enough that anyone decent at PHP can pick them up?

    I just put Struts in the list as an option, but personally I live with a Java developer, and considering my experience with C#, I'm just not that interested in learning Java (selfish personal geeky reasons). The final option was a joke: http://www.bbc.co.uk/blogs/radiolabs/2007/11/perl_on_rails.shtml

    Read the article

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error:

        "Failure inserting into the read-only column ColumnName"

    What can I do to let SSIS know that this view is safe to insert into, because there is an INSTEAD OF trigger that will handle the insert?

    EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view. In my INSTEAD OF trigger I handle the logic of maintaining all the relationships between tables. My view looks something like this:

        CREATE VIEW ImportView AS
        SELECT
            CONVERT(varchar(100), NULL) AS CustomerName,
            CONVERT(varchar(100), NULL) AS Address1,
            CONVERT(varchar(100), NULL) AS Address2,
            CONVERT(varchar(100), NULL) AS City,
            CONVERT(char(2), NULL) AS State,
            CONVERT(varchar(250), NULL) AS ItemOrdered,
            CONVERT(int, NULL) AS QuantityOrdered
            ...

    I will never need to select from this view; I only use it to insert data into it from this flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only, because there is an INSTEAD OF trigger on this view.

    Read the article

  • Ignoring a model with all blank fields in Ruby on Rails

    - by aguynamedloren
    I am trying to create multiple items (each with a name value and a content value) in a single form. The code I have is functioning, but I cannot figure out how to ignore items that are blank. Here's the code:

        # item.rb
        class Item < ActiveRecord::Base
          attr_accessible :name, :content
          validates_presence_of :name, :content
        end

        # items_controller.rb
        class ItemsController < ApplicationController
          def new
            @items = Array.new(3){ Item.new }
          end

          def create
            @items = params[:items].values.collect{ |item| Item.new(item) }
            if @items.each(&:save!)
              flash[:notice] = "Successfully created item."
              redirect_to root_url
            else
              render :action => 'new'
            end
          end
        end

        # new.html.erb
        <% form_tag :action => 'create' do %>
          <% @items.each_with_index do |item, index| %>
            <% fields_for "items[#{index}]", item do |f| %>
              <p>
                Name: <%= f.text_field :name %>
                Content: <%= f.text_field :content %>
              </p>
            <% end %>
          <% end %>
          <%= submit_tag %>
        <% end %>

    This code works when all fields for all items are filled out in the form, but fails if any fields are left blank (due to validations). The goal is that 1 or 2 items could still be saved, even if others are left blank. I'm sure there is a simple solution to this, but I've been tinkering for hours to no avail. Any help is appreciated!

    Read the article

  • Building Reducisaurus URLs

    - by Alix Axel
    I'm trying to use the Reducisaurus web service to minify CSS and JavaScript, but I've run into a problem. Suppose I have two unminified CSS files at:

        http://domain.com/dynamic/styles/theme.php?color=red
        http://domain.com/dynamic/styles/typography.php?font=Arial

    According to the docs I should call the web service like this:

        http://reducisaurus.appspot.com/css?url=http://domain.com/dynamic/styles/theme.php?color=red

    And if I want to minify both CSS files at once:

        http://reducisaurus.appspot.com/css?url1=http://domain.com/dynamic/styles/theme.php?color=red&url2=http://domain.com/dynamic/styles/theme.php?color=red

    If I wanted to specify a different number of seconds for the cache (3600 for instance) I would use:

        http://reducisaurus.appspot.com/css?url=http://domain.com/dynamic/styles/theme.php?color=red&expire_urls=3600

    And again for both CSS files at once:

        http://reducisaurus.appspot.com/css?url1=http://domain.com/dynamic/styles/theme.php?color=red&url2=http://domain.com/dynamic/styles/theme.php?color=red&expire_urls=3600

    Now my question is: how does Reducisaurus know how to separate the URLs I want? How does it know that &expire_urls=3600 is not part of my URL? And how does it know that &url2=... is not a GET argument of url1? Am I doing this right? Do I need to urlencode my URLs?

    I took a peek into the source code and although my Java is very poor it seems that the methods acquireFromRemoteUrl() and getSortedParameterNames() from the BaseServlet.java file hold the answers to my question - if a GET argument name contains - or _ it should be ignored?! What about multiple &url(n)s?
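    One way to sidestep the ambiguity entirely is to percent-encode each source URL before appending it as a query parameter, so embedded "&" and "=" characters cannot be mistaken for new parameters. A small Python sketch of how such a request URL could be assembled (the parameter names follow the ones quoted above; the rest is illustrative, not a statement about what the service requires):

        # Sketch: build a minifier request with each source URL percent-encoded.
        from urllib.parse import urlencode

        sources = [
            "http://domain.com/dynamic/styles/theme.php?color=red",
            "http://domain.com/dynamic/styles/typography.php?font=Arial",
        ]

        params = {f"url{i}": url for i, url in enumerate(sources, start=1)}
        params["expire_urls"] = 3600

        request_url = "http://reducisaurus.appspot.com/css?" + urlencode(params)
        print(request_url)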

    Read the article

  • Applets failing to load

    - by Roy Tang
    While testing our setup for user acceptance testing, we got some reports that Java applets in our web application would occasionally fail to load. The environment where it was reported was WinXP/IE6, and there were no errors found in the Java console. Obviously we'd like to avoid it. What sort of things should we be checking for here? On our local servers, everything seems fine. There's some turnaround time when sending questions to the on-site guy, so I'd like to cover as many possible causes as possible.

    Some more info: we have multiple applets, and in the instances where they fail to load, all of them fail to load. The applet jar files vary in size from 2MB to 8MB. I'm told it seems more likely to happen if the applet isn't cached yet, i.e. if they've been able to load the applets once on a given machine, further runs on that machine go smoothly. I'm wondering if there's some sort of network transfer error when downloading the applets, but I don't know how to verify that. Any advice is welcome!

    Read the article

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes.

    Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere?

    Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.
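    For what it's worth, the underlying idea (a map kept as a sorted array with binary search, so storage is one contiguous block that can be grown up front) is easy to sketch. Here it is in Python purely as an illustration of the structure being asked about, not as a C++ answer:

        # Illustrative "flat map": keys and values kept in parallel sorted lists,
        # looked up by binary search. Contiguous storage and cheap iteration,
        # at the cost of O(n) inserts (the classic sorted-vector trade-off).
        import bisect

        class FlatMap:
            def __init__(self):
                self._keys = []
                self._values = []

            def insert(self, key, value):
                i = bisect.bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    self._values[i] = value          # overwrite existing key
                else:
                    self._keys.insert(i, key)
                    self._values.insert(i, value)

            def get(self, key, default=None):
                i = bisect.bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    return self._values[i]
                return default

    In C++ terms this is the layout a sorted std::vector of pairs gives you; Boost.Container's flat_map/flat_set are, to my knowledge, built on exactly this and expose reserve().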

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com; asking here at another user's suggestion.

    I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options, and all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual-licensed, but I'm more interested in finding the right tool than anything else. I'm not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Runs on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections

    Bonus points:
    - Open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables

    So far I've tried:
    - Ruby: 148 MB on disk, 23000 files
    - Portable Python: 54 MB on disk, 2800 files
    - Strawberry Perl: 123 MB on disk, 3600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting

    ---- cut: points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining, but they're primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine; I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, it suddenly becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.

    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size.

    You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed. I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in on. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • python histogram one-liner

    - by mykhal
    There are many ways to code a histogram in Python. By histogram, I mean a function counting objects in an iterable, resulting in a count table (i.e. a dict), e.g.:

        >>> L = 'abracadabra'
        >>> histogram(L)
        {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}

    It can be written like this:

        def histogram(L):
            d = {}
            for x in L:
                if x in d:
                    d[x] += 1
                else:
                    d[x] = 1
            return d

    However, there are far fewer ways to do this in a single expression. If we had "dict comprehensions" in Python, we would write:

        >>> { x: L.count(x) for x in set(L) }

    but we don't have them, so we have to write:

        >>> dict([(x, L.count(x)) for x in set(L)])

    This approach may be readable, but it is not efficient: L is walked through multiple times, so it won't work for single-use generators. The function should also iterate well through gen(), where:

        def gen():
            for x in L:
                yield x

    We can go with reduce (R.I.P.):

        >>> reduce(lambda d,x: dict(d, x=d.get(x,0)+1), L, {})  # wrong!

    Oops, that does not work: the key name is 'x', not x :( I ended up with:

        >>> reduce(lambda d,x: dict(d.items() + [(x, d.get(x, 0)+1)]), L, {})

    (In py3k, we would have to write list(d.items()) instead of d.items(), but it's hypothetical, since there is no reduce there.)

    Please beat me with a better one-liner, more readable! ;)
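    With the benefit of hindsight: dict comprehensions did land in Python 2.7/3.0, so the {x: L.count(x) for x in set(L)} form above is legal syntax now (though still multi-pass), and collections.Counter gives a genuinely single-pass one-liner. A small sketch:

        # Single-pass alternative: Counter consumes the iterable exactly once
        # and is itself a dict subclass.
        from collections import Counter

        L = 'abracadabra'

        print(Counter(L))          # counts: a:5, b:2, r:2, c:1, d:1
        print(dict(Counter(L)))    # plain dict, if that matters

        # Works on a one-shot generator too, since the input is only walked once:
        print(Counter(c for c in L))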

    Read the article

  • Is it possible to alternate between two types in an XML Schema

    - by lief79
    I'm wondering if I am missing something obvious. I have a list of numbers, possibly including ranges (think of the print-page option of a print dialog). Ideally I'd like the final XML output to look like this, with page and pageRange elements in any order:

        <pages>
          <page>1</page>
          <pageRange><start>3</start><end>6</end></pageRange>
          <page>34</page>
        </pages>

    What would I have to put into a schema to allow this? From what I saw:

    - A sequence with multiple page and pageRange elements allowed doesn't permit alternation.
    - A choice only allows one or the other.

    I tried messing with all, but I wasn't getting it to validate properly. My short-term solution is to have a sequence of ranges, and to force single numbers into a range, but it seems potentially cumbersome. So am I missing something?
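    For the record, an xs:choice with maxOccurs="unbounded" is one way to express "page or pageRange, repeated, in any order". A quick sketch (assuming lxml is available) that checks the sample document above against such a schema:

        # Sketch: validate the sample <pages> document against a schema whose
        # content model is a repeated xs:choice of page / pageRange.
        from lxml import etree

        XSD = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="pages">
            <xs:complexType>
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                <xs:element name="page" type="xs:integer"/>
                <xs:element name="pageRange">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="start" type="xs:integer"/>
                      <xs:element name="end" type="xs:integer"/>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:choice>
            </xs:complexType>
          </xs:element>
        </xs:schema>"""

        DOC = b"""<pages>
          <page>1</page>
          <pageRange><start>3</start><end>6</end></pageRange>
          <page>34</page>
        </pages>"""

        schema = etree.XMLSchema(etree.XML(XSD))
        print(schema.validate(etree.XML(DOC)))   # True: alternation is allowed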

    Read the article

  • Excel CSV into Nested Dictionary; List Comprehensions

    - by victorhooi
    Heya, I have Excel CSV files with employee records in them. Something like this:

        mail,first_name,surname,employee_id,manager_id,telephone_number
        [email protected],john,smith,503422,503423,+65(2)3423-2433
        [email protected],george,brown,503097,503098,+65(2)3423-9782
        ....

    I'm using DictReader to put this into a nested dictionary:

        import csv
        gd_extract = csv.DictReader(open('filename 20100331 original.csv'), dialect='excel')
        employees = dict([(row['employee_id'], row) for row in gd_extract])

    1. Is the above the proper way to do it? It does work, but is it the Right Way? Something more efficient? Also, the funny thing is, in IDLE, if I try to print out "employees" at the shell, it seems to cause IDLE to crash (there are approximately 1051 rows).

    2. Remove employee_id from the inner dict. The second issue: I'm putting it into a dictionary indexed by employee_id, with the value as a nested dictionary of all the values. However, employee_id is also a key:value pair inside the nested dictionary, which is a bit redundant? Is there any way to exclude it from the inner dictionary?

    3. Manipulate data in the comprehension. Thirdly, we need to do some manipulation of the imported data. For example, all the phone numbers are in the wrong format, so we need to do some regex there. Also, we need to convert manager_id to an actual manager's name and their email address. Most managers are in the same file, while others are in an external_contractors CSV, which is similar but not quite the same format (I can import that into a separate dict though). Are these two items things that can be done within the single list comprehension, or should I use a for loop? Or do multiple comprehensions work? (Sample code would be really awesome here.) Or is there a smarter way in Python to do it?

    Cheers, Victor
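    Since sample code was requested, here is a rough sketch of one way to cover points 2 and 3 in a single pass. The phone-number regex and the manager lookup are guesses for illustration; the external_contractors merge is left as a comment:

        # Sketch: key employees by employee_id, drop the redundant id from the
        # inner dict, normalize phone numbers, and resolve manager_id to a name.
        import csv
        import re

        def load_rows(path):
            with open(path, newline='') as fh:
                return list(csv.DictReader(fh, dialect='excel'))

        def clean(row, managers):
            row = dict(row)                                  # don't mutate the reader's row
            emp_id = row.pop('employee_id')                  # point 2: no id in the inner dict
            row['telephone_number'] = re.sub(r'[^\d+]', '', row['telephone_number'])
            mgr = managers.get(row['manager_id'])
            if mgr:                                          # point 3: manager_id -> name/mail
                row['manager_name'] = f"{mgr['first_name']} {mgr['surname']}"
                row['manager_mail'] = mgr['mail']
            return emp_id, row

        rows = load_rows('filename 20100331 original.csv')
        managers = {r['employee_id']: r for r in rows}       # could also merge contractors here
        employees = dict(clean(r, managers) for r in rows)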

    Read the article

  • Retrieve a list of the most popular GET param variations for a given URL?

    - by jamtoday
    I'm working on building intelligence around link propagation, and because I need to deal with many short-URL services where a reverse lookup from an exact URL address is required, I need to be able to resolve multiple approximate versions of the same URL. An example would be a URL like:

        http://www.example.com?ref=affil&hl=en&ct=0

    Of course, changing GET params in certain circumstances can refer to a completely different page, especially if the GET params in question refer to a profile or content ID. But a quick parse of the page would quickly determine how similar the pages were to each other. Using a bit of machine learning, it could quickly become clear which GET params don't affect the content of the pages returned for a given site.

    I'm assuming a service to send a URL and get a list of very similar URLs could only be offered by the likes of Google or Yahoo (or Twitter), but they don't seem to offer this feature, and I haven't found any other services that do. If you know of any services that do cluster together groups of almost identical URLs in the aforementioned way, please let me know. My bounty is a hug.
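    In the absence of such a service, one local approximation is to canonicalize URLs by dropping the query parameters you've learned are irrelevant for a given host. A sketch (the per-host ignore table is, of course, exactly the part the machine learning would have to supply):

        # Sketch: collapse "approximate versions" of a URL by removing query params
        # known (or learned) to be irrelevant for that host. IGNORE is illustrative.
        from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

        IGNORE = {
            "www.example.com": {"ref", "hl", "ct"},
        }

        def canonicalize(url):
            parts = urlparse(url)
            ignored = IGNORE.get(parts.netloc, set())
            kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
                    if k not in ignored]
            kept.sort()                      # order-insensitive comparison
            return urlunparse(parts._replace(query=urlencode(kept)))

        print(canonicalize("http://www.example.com?ref=affil&hl=en&ct=0"))
        # -> http://www.example.com  (all params were in the ignore set)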

    Read the article

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path following writable= in gitosis.conf, I managed to insert a few line breaks. I saved, committed and pushed to the server, and received the following parsing error:

        Traceback (most recent call last):
          File "/usr/bin/gitosis-run-hook", line 8, in <module>
            load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run
            return app.main()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main
            self.handle_args(parser, cfg, options, args)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args
            post_update(cfg, git_dir)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update
            cfg.read(os.path.join(export, '..', 'gitosis.conf'))
          File "/usr/lib/python2.5/ConfigParser.py", line 267, in read
            self._read(fp, filename)
          File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read
            raise e
        ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf
        (...)

    I have removed the line break and amended the commit with:

        git commit -m "fix linebreak" --amend

    However, git push still yields the exact same error. It leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?
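    Independently of recovering the remote, the file can be linted locally before committing, since the traceback shows gitosis parsing it with the standard ConfigParser. A small sketch of such a pre-push check (file name assumed to be the local working copy):

        # Sketch: parse gitosis.conf locally with ConfigParser, so a stray line
        # break is caught before it ever reaches the server-side hook.
        import sys
        from configparser import ConfigParser, ParsingError  # "ConfigParser" module on Python 2

        parser = ConfigParser()
        try:
            with open("gitosis.conf") as fh:
                parser.read_file(fh)
        except ParsingError as exc:
            print("gitosis.conf has errors:\n%s" % exc)
            sys.exit(1)
        print("gitosis.conf parses cleanly")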

    Read the article

  • Getting the highest owner of a ToolStripDropDownItem

    - by andyperfect
    I'm currently working on a project in which at one point, the user may right click a button which brings up a contextMenuStrip. I am already able to find the owner accurately from that strip, and manipulate the button clicked as follows: Dim myItem As ToolStripMenuItem = CType(sender, ToolStripMenuItem) Dim cms As ContextMenuStrip = CType(myItem.Owner, ContextMenuStrip) Dim buttonPressed As DataButton = DirectCast(cms.SourceControl, DataButton) But now for the tricky part. Within this contextmenuStrip, I have a DropDown menu with multiple items in there. I would assume you would be able to work your way up the ladder doing casts like above in the manner of ToolStripDrowpDownItem > ToolStripDropDownMenu > ToolStripMenuItem > ContextMenuStrip Unfortunately, when I try to get the sourcecontrol from this menuStrip, it return Nothing. Any ideas on how I can get the button that was pressed from this toolStripMenuItem? My current code is as follows (in which the sourceControl is Nothing) Dim myItem As ToolStripDropDownItem = CType(sender, ToolStripDropDownItem) Dim dropDown As ToolStripDropDownMenu = CType(myItem.Owner, ToolStripDropDownMenu) Dim menuItem As ToolStripMenuItem = CType(dropDown.OwnerItem, ToolStripMenuItem) Dim cms As ContextMenuStrip = CType(menuItem.Owner, ContextMenuStrip) Dim buttonPressed As DataButton = DirectCast(cms.SourceControl, DataButton) Any ideas on how to go about doing what I did in that first method, but just working my way up from further down the ladder?

    Read the article

  • Alter Table Filestream runs even if IF statement is false

    - by Allison
    I am writing a script to update a database to add Filestream capability. The script needs to be able to be run multiple times without erroring. This is what I currently have IF ((select count(*) from sys.columns a inner join sys.objects b on a.object_id = b.object_id inner join sys.default_constraints c on c.parent_object_id = a.object_id where a.name = 'evidence_data' and b.name='evidence' and c.name='DF__evidence_evidence_data') = 0) begin ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup ) ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL; end GO The first time I run this against the database it works fine. The second time when the if statement should be false it throws an error saying "Cannot add FILESTREAM filegroup or partition scheme since table 'evidence' has a FILESTREAM filegroup or partition scheme already." If I put a simple select into the if statement and take out the alter table filestream on line it functions correctly and does not perform the if statement. So esentially it is always running the alter table filestream on statement even if the if statement is false. Any thoughts or suggestions would be great. Thanks.

    Read the article

  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features for a select number of customers using the live "database". Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment?

    Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase it better. To make it easier, I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of are images, XML files, user uploads and data in SQL Server. Having a few records of their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic a production environment before going live?

    Read the article

  • Design Decision - Scaling out a web-based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort.

    In order to explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or such services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is 40K would be overkill.

    My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, and moving on to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale services out onto multiple network servers, while at the same time avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great.

    Note: database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck will be the application servers over that time period.
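    As a language-neutral sketch of the encapsulation being proposed (written in Python here rather than C#; the service name and config keys are invented for illustration), the callers only ever see an interface, and the in-process vs. remote choice is made at wiring time:

        # Sketch: callers depend on an AuthService interface; whether it runs
        # in-process or behind some RPC transport is a configuration decision.
        from abc import ABC, abstractmethod

        class AuthService(ABC):
            @abstractmethod
            def authenticate(self, username: str, password: str) -> bool: ...

        class InProcessAuthService(AuthService):
            """Direct call; what the 40K-user deployment would use."""
            def __init__(self, user_store):
                self._users = user_store
            def authenticate(self, username, password):
                return self._users.get(username) == password

        class RemoteAuthService(AuthService):
            """Same interface, but forwards over an RPC client (stubbed here)."""
            def __init__(self, rpc_client):
                self._rpc = rpc_client
            def authenticate(self, username, password):
                return self._rpc.call("authenticate", username, password)

        def build_auth_service(config, user_store=None, rpc_client=None) -> AuthService:
            if config.get("auth_transport") == "remote":
                return RemoteAuthService(rpc_client)
            return InProcessAuthService(user_store)

        # Page controllers only ever see AuthService, so scaling out later means
        # changing the factory/config, not the call sites.
        auth = build_auth_service({"auth_transport": "local"}, user_store={"alice": "pw"})
        print(auth.authenticate("alice", "pw"))   # True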

    Read the article

  • Merge Word Documents (Office Interop & .NET), Keeping Formatting

    - by mbmccormick
    I'm having some difficulty merging multiple Word documents together using the Microsoft Office Interop Assemblies (Office 2007) and ASP.NET 3.5. I'm able to merge the documents, but some of my formatting is missing (namely the fonts and images). My current merge code is shown below.

        private void CombineDocuments()
        {
            object wdPageBreak = 7;
            object wdStory = 6;
            object oMissing = System.Reflection.Missing.Value;
            object oFalse = false;
            object oTrue = true;

            string fileDirectory = @"C:\documents\";

            Microsoft.Office.Interop.Word.Application WordApp = new Microsoft.Office.Interop.Word.Application();
            Microsoft.Office.Interop.Word.Document wDoc = WordApp.Documents.Add(ref oMissing, ref oMissing, ref oMissing, ref oMissing);

            string[] wordFiles = Directory.GetFiles(fileDirectory, "*.doc");
            for (int i = 0; i < wordFiles.Length; i++)
            {
                string file = wordFiles[i];
                wDoc.Application.Selection.Range.InsertFile(file, ref oMissing, ref oMissing, ref oMissing, ref oFalse);
                wDoc.Application.Selection.Range.InsertBreak(ref wdPageBreak);
                wDoc.Application.Selection.EndKey(ref wdStory, ref oMissing);
            }

            string combineDocName = Path.Combine(fileDirectory, "Merged Document.doc");
            if (File.Exists(combineDocName))
                File.Delete(combineDocName);

            object combineDocNameObj = combineDocName;
            wDoc.SaveAs(ref combineDocNameObj, ref m_WordDocumentType, ref oMissing, ref oMissing, ref oMissing,
                        ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing,
                        ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing);
        }

    I don't care necessarily how this is accomplished. It could output via PDF if it had to. I just want the formatting to carry over. Any help or hints you could provide would be appreciated! Thanks!

    Read the article

  • Nested factory methods in Objective-C

    - by StephenT
    What's the best way to handle memory management with nested factory methods, such as in the following example?

        @implementation MyClass

        + (MyClass *) SpecialCase1 {
            return [MyClass myClassWithArg:1];
        }

        + (MyClass *) SpecialCase2 {
            return [MyClass myClassWithArg:2];
        }

        + (MyClass *) myClassWithArg:(int)arg {
            MyClass *instance = [[[MyClass alloc] initWithArg:arg] autorelease];
            return instance;
        }

        - (id) initWithArg:(int)arg {
            self = [super init];
            if (nil != self) {
                self.arg = arg;
            }
            return self;
        }

        @end

    The problem here (I think) is that the autorelease pool is flushed before the SpecialCaseN methods return to their callers. Hence, the ultimate caller of SpecialCaseN can't rely on the result having been retained. (I get "[MyClass copyWithZone:]: unrecognized selector sent to instance 0x100110250" on trying to assign the result of [MyClass SpecialCase1] to a property on another object.)

    The reason for wanting the SpecialCaseN factory methods is that, in my actual project, there are multiple parameters required to initialize the instance and I have a pre-defined list of "model" instances that I'd like to be able to create easily. I'm sure there's a better approach than this.

    Read the article

  • Why doesn't it work to write this NSMutableArray to a plist?

    - by Emil
    Edited. Hey, I am trying to write an NSMutableArray to a plist. The compiler does not show any errors, but it does not write to the plist anyway. I have tried this on a real device too, not just the Simulator.

    Basically, what this code does is: when you click the accessoryView of a UITableViewCell, it gets the indexPath pressed, edits an NSMutableArray and tries to write that NSMutableArray to a plist. It then reloads the arrays mentioned (from multiple plists) and reloads the data in a UITableView from the arrays. Code:

        NSIndexPath *indexPath = [table indexPathForRowAtPoint:[[[event touchesForView:sender] anyObject] locationInView:table]];

        [arrayFav removeObjectAtIndex:[arrayFav indexOfObject:[NSNumber numberWithInt:[[arraySub objectAtIndex:indexPath.row] intValue]]]];

        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"arrayFav.plist"];
        NSLog(@"%@ - %@", rootPath, plistPath);

        [arrayFav writeToFile:plistPath atomically:YES];

        // Reloads data into the arrays
        [self loadDataFromPlists];

        // Reloads data in tableView from arrays
        [tableFarts reloadData];

    Read the article

  • Simplepie - fetch feeds from database

    - by krike
    I want to fetch multiple feeds from the database and, in so doing, fetch all new content from those feeds. It works, but there is a problem and I have no idea what's causing it. This is the code:

        $feed_sql = mysqli_query($link, "SELECT feed from tutorial_feed WHERE approved=1");
        $feeds = array();
        $i = 0;
        while($feed_r = mysqli_fetch_object($feed_sql)):
            $feeds[$i] .= $feed_r->feed;
            $i++;
        endwhile;

        $feed = new SimplePie($feeds);
        $feed->handle_content_type();

        foreach($feed->get_items(0, 100) as $item) :
            echo $item->get_permalink()."";
        endforeach;

    I first get:

        Notice: Undefined offset: 0 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 1 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 2 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 3 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 4 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 5 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 6 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 7 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 8 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 9 in I:\wamp\www\cmstut\includes\cron.php on line 22
        Notice: Undefined offset: 10 in I:\wamp\www\cmstut\includes\cron.php on line 22

    and then it will start printing the permalinks to the new content based on the imported feeds. I know "undefined offset" means it does not exist, but I don't get it. Any help would be appreciated.

    Read the article

  • RESTful design, how to name pages outside CRUD et al?

    - by sscirrus
    Hi all, I'm working on a site that has quite a few pages that fall outside my limited understanding of RESTful design, which is essentially: Create, Read, Update, Delete, Show, List.

    Here's the question: what is a good system for labeling actions/routes when a page doesn't neatly fall into CRUD/show/list? Some of my pages have info about multiple tables at once.

    First case: I am building a site that gives some customers a 'home base' after they log on. It does NOT give them any information about themselves, so it shouldn't be, for example, /customers/show/1. It does have information about companies, but there are other pages on the site that do that differently. What do you do in these situations? This 'home base' is shown to customers and it mainly has info about companies (but not uniquely so).

    Second case: I have a table called 'Matchings' in between customers and companies. These matchings are accessed in completely different ways on different parts of the site (different layouts, different CSS sheets, different types of users accessing them, etc.). They can't ALL be matchings/show. What's the best way to label the others?

    Thanks very much. =)

    Read the article
