Search Results

Search found 8875 results on 355 pages for 'mime types'.

  • Quick and easy flood protection?

    - by James P
    I have a site where a user submits a message via AJAX to a file called like.php. That file inserts the user's message into a database and then sends a link back to the user. My JavaScript disables the text box the user types into while the AJAX request is in flight. The only problem is that a malicious user can simply keep sending POST requests to like.php and flood my database, so I would like to implement simple flood protection. I don't really want the hassle of another database table logging user IPs and such: if someone is flooding the site, that would only add more database reads and writes, slowing it down further. I thought about using sessions instead: keep a timestamp in the session, check it every time data is posted to like.php, and only add the data to the database if enough time has passed since the last accepted submission, otherwise send out an error and block the request. Whenever a submission is accepted, update the session with a fresh timestamp. What do you think? Would this be the best way to go about it, or are there easier alternatives? Thanks for any help. :)
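
    The timestamp check described above is a fixed-interval rate limit. Here is a minimal sketch of the idea, written in C# for concreteness since the logic is language-agnostic; the session key "lastPost" and the ten-second threshold are illustrative, not from the question:

        using System;
        using System.Collections.Generic;

        static class FloodGuard
        {
            // Allow at most one accepted submission per session every MinInterval.
            static readonly TimeSpan MinInterval = TimeSpan.FromSeconds(10);

            public static bool TryAccept(IDictionary<string, object> session)
            {
                DateTime now = DateTime.UtcNow;
                if (session.TryGetValue("lastPost", out var last) &&
                    now - (DateTime)last < MinInterval)
                {
                    return false;            // too soon: reject without touching the database
                }
                session["lastPost"] = now;   // accepted: refresh the timestamp
                return true;                 // caller may insert the message
            }
        }

    Because the state lives in the session, a flood of rejected requests never reaches the database; only accepted submissions cause a write.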

  • Test the sequentiality of a column with a single SQL query

    - by LauriE
    Hey, I have a table that contains sets of sequential datasets, like this:

        ID  set_ID   some_column   n
        1   'set-1'  'aaaaaaaaaa'  1
        2   'set-1'  'bbbbbbbbbb'  2
        3   'set-1'  'cccccccccc'  3
        4   'set-2'  'dddddddddd'  1
        5   'set-2'  'eeeeeeeeee'  2
        6   'set-3'  'ffffffffff'  2
        7   'set-3'  'gggggggggg'  1

    At the end of a transaction that makes several types of modifications to those rows, I would like to ensure that within a single set all the values of "n" are still sequential (and roll back otherwise). They do not need to be in the same order as the primary key, just sequential, like 1-2-3 or 3-1-2, but not like 1-3-4. Because there might be thousands of rows within a single set, I would prefer to do the check in the database and avoid the overhead of fetching the data just for verification after making some small changes. There is also the issue of concurrency. As I understand it, locking in InnoDB (repeatable read) works so that if I have an index on "n", InnoDB also locks the "gaps" between values. If I combine set_ID and n into a single index, would that eliminate the problem of phantom rows appearing? Looks to me like a common problem. Any brilliant ideas? Thanks! Note: using MySQL + InnoDB.
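
    For reference, "the n values of a set are sequential" reduces to three aggregate conditions: the minimum is 1, the maximum equals the row count, and there are no duplicates. A hedged in-memory illustration of that reduction in C# (the question asks for an in-database check, so this is only the criterion, not the solution):

        using System;
        using System.Linq;

        static class SequenceCheck
        {
            // True when values are exactly 1..values.Length in some order.
            public static bool IsSequential(int[] values) =>
                values.Length > 0
                && values.Min() == 1
                && values.Max() == values.Length
                && values.Distinct().Count() == values.Length;
        }

        // IsSequential(new[] { 3, 1, 2 })  -> true
        // IsSequential(new[] { 1, 3, 4 })  -> false

    The same aggregates (MIN, MAX, COUNT, COUNT DISTINCT) are available per set_ID in SQL via GROUP BY, which is what makes an in-database version plausible.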

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people deal with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) while staying efficient, remaining database-agnostic, and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design practice, so the base interfaces would be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. And what about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo,
                       IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to that database efficient, without a constant stream of open-close-open-close on connections and inadvertent escalation to MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like nested transactions will inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
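
    One conventional answer in .NET is System.Transactions.TransactionScope: the business-layer method opens one ambient transaction and the repository calls inside it enlist automatically, without the repositories knowing about transactions at all. A hedged sketch using the question's own interfaces (caveat: whether enlistment stays lightweight or escalates to a distributed transaction depends on the ADO.NET provider, and SQLite providers vary):

        using System.Transactions;

        public void DoStuffAtomically()
        {
            // Everything below shares one ambient transaction;
            // it commits only if Complete() is reached.
            using (var scope = new TransactionScope())
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
                scope.Complete();   // skipped on exception, so Dispose() rolls back
            }
        }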

  • Using Generics to return a literal string or from Dictionary<string, object>

    - by Mike
    I think I outsmarted myself this time. Feel free to edit the title, too; I could not think of a good one. I am reading from a file that is like an XML file, so it contains strings. A given element will hold either a literal value or a "command" to fetch the value from the work container:

        <Email>[email protected]</Email>
        or
        <Email>[? MyEmail ?]</Email>

    Instead of writing ifs all over the place, I wanted to put the logic in a generic function: if the string is a container command, grab the value from the container; otherwise take the literal string and convert it to the desired type. It's up to the user to ensure the file is OK and the type is correct. Another example:

        <Answer>3</Answer>
        or
        <Answer>[? NumberOfSales ?]</Answer>

    This is the function I started to work on:

        public class WorkContainer : Dictionary<string, object>
        {
            public T GetKeyValue<T>(string Parameter)
            {
                if (Parameter.StartsWith("[? "))
                {
                    string key = Parameter.Replace("[? ", "").Replace(" ?]", "");
                    if (this.ContainsKey(key))
                    {
                        return (T)this[key];
                    }
                    else
                    {
                        // may throw an error for value types
                        return default(T);
                    }
                }
                else
                {
                    // Does not compile:
                    if (typeof(T) is string)
                    {
                        return Parameter;
                    }
                    // OR
                    return (T)Parameter;
                }
            }
        }

    The call would be:

        mail.To = container.GetKeyValue<string>("[email protected]");
        or
        mail.To = container.GetKeyValue<string>("[? MyEmail ?]");

        int answer = container.GetKeyValue<int>("3");
        or
        answer = container.GetKeyValue<int>("[? NumberOfSales ?]");

    But it does not compile?
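
    Two separate things break that else branch: typeof(T) returns a System.Type object, so "typeof(T) is string" is never the right test (compare types with ==), and C# will not cast a string directly to an unconstrained T. A hedged sketch of the usual repair, routing the string case through object and everything else through Convert.ChangeType:

        // Literal value: convert the raw string to the requested type.
        if (typeof(T) == typeof(string))
        {
            return (T)(object)Parameter;   // double cast: string -> object -> T
        }
        return (T)Convert.ChangeType(Parameter, typeof(T));   // e.g. "3" -> 3 for T = int

    Convert.ChangeType throws FormatException for unconvertible input, which matches the question's stance that the file's correctness is the user's responsibility.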

  • How to use a TFileStream to read 2D matrices into dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time. I've never read raw data like this, and don't know the IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:

        16-bit two's complement binary integer
        32-bit two's complement binary integer
        64-bit two's complement binary integer
        IEEE single precision floating-point

    For 32-bit two's complement I'm thinking of something like the code below; changing to Int64 and Int16 should be straightforward. How can I read the IEEE single-precision format? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it to all four data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move() calls, so that the same code works for all four data types? Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;   // not known until run time
          NumberOfCols := 100;  // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows - 1 do
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            // read into the row's first element, not the dynamic-array reference itself
            FileStream.ReadBuffer(Matrix[CurRow][0], NumberOfCols * SizeOf(Int32));
          end;
        end;

  • [nHibernate] casting string to bool using nHibernate Criteria

    - by code-zoop
    I have an nHibernate query using Criteria, and I am trying to cast a string to bool in the query itself. I have done the same with casting a string to int, and that works well (the "DataField" property is "1" as a string):

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Int32, Projections.Property("DataField")), 1))
            .List<Car>();
        tx.Commit();

    I am trying to do the same with bool, but I do not get the expected result:

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Boolean, Projections.Property("DataField")), true))
            .List<Car>();
        tx.Commit();

    "DataField" is the string "True", but the result is an empty list, where it should contain 100 elements whose "DataField" property string is set to "True". I have tried with the strings "true" and "1", but the result is still an empty list. EDIT: As commented below, I could check for the string "True" or "False", but I would say this is a more general question than just for the Boolean. Note that the idea is to have some sort of key-value representation of the data, where the value can be of different data types. I need the value table to contain all data, so storing the data as strings seems like the cleanest solution! I have been able to use the method above to store both int and double as strings and to do the cast in the query, but I have not succeeded using the same method for DateTime and Boolean. And for DateTime it is crucial to have the actual DateTime object. How can I make the cast from string to bool, and string to DateTime, work in the queries? Thanks
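
    While the cast question stands, the workaround mentioned in the EDIT can at least be written without scattering magic strings: compare against the stored string form in the query and convert after materialization. A hedged sketch (Car.DataField is assumed to be the mapped string property from the question):

        // Match on the stored string itself, then convert in memory.
        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(Projections.Property("DataField"), bool.TrueString)) // "True"
            .List<Car>();

        bool flag = bool.Parse(result[0].DataField);   // materialized objects: parse on demand

    The same after-the-fact conversion works for DateTime via DateTime.ParseExact, provided the strings were all written with one fixed format.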

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the old-school graphical style of the NES via XNA. However, my FPS is SLOW when modifying 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I wrote to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now I have things divided into about six classes, with getters and setters. I'm guessing that I'm making at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains 1D or 2D arrays of custom enumerations, ints, Colors, Vector2s, or bytes. What if I combined all of the classes into just one and accessed the contents of each array directly? The code would look a mess and ditch object-oriented principles, but the speed might be much faster. I'm also not concerned about access violations, as any attempt to get/set the data in the arrays will be done in blocks; e.g., all writing to an array will take place before any data is read from it. As for casting: I stated that I'm using custom enumerations, ints, Colors, Vector2s, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think constant casting might be a cause of the slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do multiplication, addition, subtraction, or division per frame. :)
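
    A hedged sketch of the "one flat array, direct access" layout being considered, with illustrative names: index arithmetic replaces per-pixel getter calls, and a precomputed palette of packed colors avoids casting enumerations or constructing Color values inside the loop.

        // One flat backing store for a 256x240 frame; bytes index a precomputed palette.
        const int Width = 256, Height = 240;
        static readonly byte[] Pixels = new byte[Width * Height];  // palette indices
        static readonly uint[] Palette = new uint[256];            // packed colors, filled once

        static void RenderRow(uint[] frameBuffer, int y)
        {
            int rowStart = y * Width;              // one multiply per row, not per pixel
            for (int x = 0; x < Width; x++)
            {
                // direct array reads/writes: no property calls, no casts in the hot loop
                frameBuffer[rowStart + x] = Palette[Pixels[rowStart + x]];
            }
        }

    256x240 is the NES frame size (~61K pixels), which matches the 65K figure in spirit; the exact dimensions here are an assumption.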

  • Checking if a file is a directory or just a file.

    - by Jookia
    I'm writing a program to check if something is a file or is a directory. Is there a better way to do it than this?

        #include <stdio.h>
        #include <sys/types.h>
        #include <dirent.h>
        #include <errno.h>

        int isFile(const char* name)
        {
            DIR* directory = opendir(name);

            if (directory != NULL)
            {
                closedir(directory);
                return 0;
            }

            if (errno == ENOTDIR)
            {
                return 1;
            }

            return -1;
        }

        int main(void)
        {
            const char* file = "./testFile";
            const char* directory = "./";

            printf("Is %s a file? %s.\n", file,
                   (isFile(file) == 1) ? "Yes" : "No");
            printf("Is %s a directory? %s.\n", directory,
                   (isFile(directory) == 0) ? "Yes" : "No");
            return 0;
        }

  • Problem with incomplete input when using Attoparsec

    - by Dan Dyer
    I am converting some functioning Haskell code that uses Parsec to instead use Attoparsec, in the hope of getting better performance. I have made the changes and everything compiles, but my parser does not work correctly. I am parsing a file that consists of various record types, one per line. Each of my individual functions for parsing a record or comment works correctly, but when I try to write a function to parse a sequence of records, the parser always returns a partial result because it is expecting more input. These are the two main variations that I've tried. Both have the same problem.

        items :: Parser [Item]
        items = sepBy (comment <|> recordType1 <|> recordType2) endOfLine

    For this second one I changed the record/comment parsers to consume the end-of-line characters:

        items :: Parser [Item]
        items = manyTill (comment <|> recordType1 <|> recordType2) endOfInput

    Is there anything wrong with my approach? Is there some other way to achieve what I am attempting?

  • MS Exam 70-536 - How to throw and handle exception from thread?

    - by Max Gontar
    Hello! In MS Exam 70-536 .NET Foundation, Chapter 7 "Threading", Lesson 1 "Creating Threads", there is this text:

        Be aware that because the WorkWithParameter method takes an object, Thread.Start
        could be called with any object instead of the string it expects. Being careful in
        choosing your starting method for a thread to deal with unknown types is crucial to
        good threading code. Instead of blindly casting the method parameter into our string,
        it is a better practice to test the type of the object, as shown in the following
        example:

        ' VB
        Dim info As String = TryCast(o, String)
        If info Is Nothing Then
            Throw New InvalidProgramException("Parameter for thread must be a string")
        End If

        // C#
        string info = o as string;
        if (info == null)
        {
            throw new InvalidProgramException("Parameter for thread must be a string");
        }

    So I've tried this, but the exception is not handled properly (no console exception entry; the program just terminates). What is wrong with my code (below)?

        class Program
        {
            static void Main(string[] args)
            {
                Thread thread = new Thread(SomeWork);
                try
                {
                    thread.Start(null);
                    thread.Join();
                }
                catch (InvalidProgramException ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    Console.ReadKey();
                }
            }

            private static void SomeWork(Object o)
            {
                String value = (String)o;
                if (value == null)
                {
                    throw new InvalidProgramException("Parameter for " +
                        "thread must be a string");
                }
            }
        }

    Thanks for your time!
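
    For context on the observed behavior: an exception thrown on a worker thread does not propagate across Thread.Join, so the catch block in Main never sees it and the runtime terminates the process. A hedged sketch of one common repair, capturing the exception on the worker thread and rethrowing it on the calling thread after Join (the wrapper shape is illustrative):

        using System;
        using System.Threading;

        class SafeWorker
        {
            private Exception _error;   // written on the worker thread, read after Join

            public void Run(object argument)
            {
                Thread t = new Thread(state =>
                {
                    try { SomeWork(state); }
                    catch (Exception ex) { _error = ex; }   // don't let it escape the thread
                });
                t.Start(argument);
                t.Join();
                if (_error != null)
                    throw _error;       // now catchable on the calling thread
            }

            private static void SomeWork(object o) { /* body as in the question */ }
        }

    Validating the argument before calling Thread.Start at all is the other obvious fix, since Main controls what gets passed in.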

  • How to specify generic method type parameters partly

    - by DNNX
    I have an extension method like the one below:

        public static T GetValueAs<T, R>(this IDictionary<string, R> dictionary, string fieldName)
            where T : R
        {
            R value;
            if (!dictionary.TryGetValue(fieldName, out value))
                return default(T);
            return (T)value;
        }

    Currently, I can use it in the following way:

        var dictionary = new Dictionary<string, object>();
        // ...
        var list = dictionary.GetValueAs<List<int>, object>("A");
        // this may throw an InvalidCastException - this is expected behavior

    It works pretty well, but the second type parameter is really annoying. Is it possible in C# 4.0 to rewrite GetValueAs in such a way that the method is still applicable to different types of string-keyed dictionaries AND there is no need to specify the second type parameter in the calling code, i.e. use

        var list = dictionary.GetValueAs<List<int>>("A");

    or at least something like

        var list = dictionary.GetValueAs<List<int>, ?>("A");

    instead of

        var list = dictionary.GetValueAs<List<int>, object>("A");
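
    C# infers generic arguments on an all-or-nothing basis per call, so the standard workaround is to split the call in two: the first call infers R from the dictionary, and the second names only T. A hedged sketch with illustrative names, kept to C# 4.0 syntax:

        using System.Collections.Generic;

        public static class DictionaryExtensions
        {
            // Step 1: capture the dictionary and key; R is inferred from the receiver.
            public static Getter<R> From<R>(this IDictionary<string, R> d, string field)
            {
                return new Getter<R>(d, field);
            }
        }

        public class Getter<R>
        {
            private readonly IDictionary<string, R> _d;
            private readonly string _field;

            public Getter(IDictionary<string, R> d, string field) { _d = d; _field = field; }

            // Step 2: only the target type is written out at the call site.
            public T As<T>() where T : R
            {
                R value;
                if (!_d.TryGetValue(_field, out value))
                    return default(T);
                return (T)value;
            }
        }

        // usage: var list = dictionary.From("A").As<List<int>>();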

  • WCF: collection proxy type on client

    - by Unholy
    I have the following type in WSDL (it is generated by a third-party tool):

        <xsd:complexType name="IntArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:int" />
          </xsd:sequence>
        </xsd:complexType>

    Sometimes Visual Studio generates

        public class IntArray : System.Collections.Generic.List<int> {}

    and sometimes it doesn't generate any proxy type for this WSDL at all and just uses int[]. The collection type in the Web Service configuration is System.Array. What could be the reason for such unpredictable behavior? Edited: I found a way to reproduce this behavior. For example, we have two types:

        <xsd:complexType name="IntArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:int" />
          </xsd:sequence>
        </xsd:complexType>

        <xsd:complexType name="StringArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:string" />
          </xsd:sequence>
        </xsd:complexType>

    VS generates:

        public class IntArray : System.Collections.Generic.List<int> {}
        public class StringArray : System.Collections.Generic.List<string> {}

    Now I change the StringArray type:

        <xsd:complexType name="StringArray">
          <xsd:sequence>
            <xsd:element maxOccurs="unbounded" minOccurs="0" name="Elements" type="xsd:string" />
            <xsd:any minOccurs="0" maxOccurs="unbounded" namespace="##any" processContents="lax" />
          </xsd:sequence>
          <xsd:anyAttribute namespace="##any" processContents="lax"/>
        </xsd:complexType>

    VS generates a proxy type for StringArray only, but not for IntArray.

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM):

        WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0
        (with SQL Server 2008 R2 as the database)

    All are separate projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We use self-tracking entities to communicate back and forth with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models, a few issues appeared. The models share a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" types get created, e.g.:

        CompanyModel (edmx) - Company (entity)  - ObjectChangeTracker,  ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) A few tables are shared across the models; e.g., Company (entity) is used in all the models above. This does not throw any error at compile time, but at run time it fails with: "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way to resolve this without changing the name of the entity in the same assembly? Thanks in advance, and we would appreciate any approach to these issues. Thanks, Kiran

  • VS2008 Link Error Using SafeInt3.hpp in 64bit mode.

    - by photo_tom
    I have the code below, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe"; to quote the Channel 9 video on the library, "it writes the code that you should". Which is my case: I have a class that manages a large in-memory cache of objects (6 GB), and I am very concerned about overflow/underflow issues on my pointers, sizes, and other integer variables. For this use it solves many problems. My problem comes when moving from 32-bit development mode to 64-bit production mode. When I build the app in this mode, I get the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like to either (a) prevent it from occurring or (b) make it disappear so that my QA department doesn't flag it as a problem. After spending some time researching it, I cannot find a way to do either.

  • Reserve space for initially hidden widget in QVBoxLayout

    - by Skinniest Man
    I am using a QVBoxLayout to arrange a vertical stack of widgets. I want some of them to be initially hidden and to show up only when a check box is checked. Here is an example of the code I'm using:

        MyWidget::MyWidget(QWidget *parent) : QWidget(parent)
        {
            QVBoxLayout *layout = new QVBoxLayout(this);
            QLabel *labelLogTypes = new QLabel(tr("Log Types"));

            m_checkBoxCsv = new QCheckBox(tr("&Delimited File (CSV)"));
            m_labelDelimiter = new QLabel(tr("Delimiter:"));
            m_lineEditDelimiter = new QLineEdit(",");
            checkBoxCsv_Toggled(m_checkBoxCsv->isChecked());
            connect(m_checkBoxCsv, SIGNAL(toggled(bool)), SLOT(checkBoxCsv_Toggled(bool)));

            QHBoxLayout *layoutDelimitedChar = new QHBoxLayout();
            layoutDelimitedChar->addWidget(m_labelDelimiter);
            layoutDelimitedChar->addWidget(m_lineEditDelimiter);

            m_checkBoxXml = new QCheckBox(tr("&XML File"));
            m_checkBoxText = new QCheckBox(tr("Plain &Text File"));

            // Now that everything is constructed, put it all together
            // in the main layout.
            layout->addWidget(labelLogTypes);
            layout->addWidget(m_checkBoxCsv);
            layout->addLayout(layoutDelimitedChar);
            layout->addWidget(m_checkBoxXml);
            layout->addWidget(m_checkBoxText);
            layout->addStretch();
        }

        void MyWidget::checkBoxCsv_Toggled(bool checked)
        {
            m_labelDelimiter->setVisible(checked);
            m_lineEditDelimiter->setVisible(checked);
        }

    I want m_labelDelimiter and m_lineEditDelimiter both to be initially invisible, and I want their visibility to toggle with the state of m_checkBoxCsv. This code achieves the functionality I want, but it doesn't seem to reserve space for the two initially hidden widgets. When I check the checkbox they become visible, but everything gets scrunched together to accommodate them. If I leave them initially visible, everything is laid out just the way I would like. Is there any way to make the QVBoxLayout reserve space for these widgets even when they are initially invisible?

  • How to make a plot from summaryRprof?

    - by ThorDivDev
    This is a question for a university assignment. I was given three algorithms for calculating the GCD, which I have already implemented. My problem is getting the Rprof results into a plot so I can compare the algorithms side by side. From my limited understanding of Rprof, summaryRprof and plot, Rprof is used like this:

        Rprof()         # start profiling
        # functions here
        Rprof(NULL)     # stop profiling
        summaryRprof()  # print results

    I understand that plot accepts many kinds of input: x and y values, and something called a data frame, which I assume is a fancy word for a table. To draw different lines and such, I gather I need the techniques shown here: http://www.harding.edu/fmccown/r/. What I can't figure out is how to get the summaryRprof results into the plot() function:

        > Rprof(filename="RProfOut2.out", interval=0.0001)
        > gcdBruteForce(10000, 33)
        [1] 1
        > gcdEuclid(10000, 33)
        [1] 1
        > gcdPrimeFact(10000, 33)
        [1] 1
        > Rprof(NULL)
        > summaryRprof()
        # ????? how do I get this into plot() ?????

    I have been reading on Stack Overflow and other sites that I could also try profr and proftools, although I am not very clear on their usage. The only graph I have been able to make is one using plot(system.time(gcdFunction(10, 100))). As always, any help is appreciated.

  • Entity Framework does not map 2 columns from a SqlQuery calling a stored procedure

    - by user1783530
    I'm using Code First and am trying to call a stored procedure and have it map to one of my classes. I created a stored procedure, BOMComponentChild, that returns details of a component along with information about its hierarchy in PartsPath and MyPath. I have a class for the output of this stored procedure. My issue is that everything except the two columns PartsPath and MyPath is mapped correctly; those two properties end up as Nothing. From what I understand, SqlQuery bypasses any Entity Framework name mapping and matches column names to property names directly. The names are the same, and I'm not sure why it is only these two columns. The last part of the stored procedure is:

        SELECT t.ParentID
              ,t.ComponentID
              ,c.PartNumber
              ,t.PartsPath
              ,t.MyPath
              ,t.Layer
              ,c.[Description]
              ,loc.LocationID
              ,loc.LocationName
              ,CASE WHEN sup.SupplierID IS NULL THEN 1 ELSE sup.SupplierID END AS SupplierID
              ,CASE WHEN sup.SupplierName IS NULL THEN 'Scinomix' ELSE sup.SupplierName END AS SupplierName
              ,c.Active
              ,c.QA
              ,c.IsAssembly
              ,c.IsPurchasable
              ,c.IsMachined
              ,t.QtyRequired
              ,t.TotalQty
        FROM BuildProducts t
        INNER JOIN [dbo].[BOMComponent] c ON c.ComponentID = t.ComponentID
        LEFT JOIN [dbo].[BOMSupplier] bsup ON bsup.ComponentID = t.ComponentID AND bsup.IsDefault = 1
        LEFT JOIN [dbo].[LookupSupplier] sup ON sup.SupplierID = bsup.SupplierID
        LEFT JOIN [dbo].[LookupLocation] loc ON loc.LocationID = c.LocationID
        WHERE (@IsAssembly IS NULL OR IsAssembly = @IsAssembly)
        ORDER BY t.MyPath

    and the class it maps to is:

        Public Class BOMComponentChild
            Public Property ParentID As Nullable(Of Integer)
            Public Property ComponentID As Integer
            Public Property PartNumber As String
            Public Property MyPath As String
            Public Property PartsPath As String
            Public Property Layer As Integer
            Public Property Description As String
            Public Property LocationID As Integer
            Public Property LocationName As String
            Public Property SupplierID As Integer
            Public Property SupplierName As String
            Public Property Active As Boolean
            Public Property QA As Boolean
            Public Property IsAssembly As Boolean
            Public Property IsPurchasable As Boolean
            Public Property IsMachined As Boolean
            Public Property QtyRequired As Integer
            Public Property TotalQty As Integer
            Public Property Children As IDictionary(Of String, BOMComponentChild) =
                New Dictionary(Of String, BOMComponentChild)
        End Class

    I am calling it like this:

        Me.Database.SqlQuery(Of BOMComponentChild)(
            "EXEC [BOMComponentChild] @ComponentID, @PathPrefix, @IsAssembly", params).ToList()

    When I run the stored procedure in Management Studio, the columns are correct and not null. I just can't figure out why these two won't map, as they hold the important information returned by the stored procedure. The types of PartsPath and MyPath are varchar(50).

  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, suppose you have a data.frame of two types of species (which can only interact across species types, not within them), and each species has a trait value (e.g., predator mouth size determines which prey species it can eat). How do we simulate a network based on the traits of the species, so that, for instance, two species interact only if their trait values overlap? UPDATE: Here is a minimal example of what I am trying to do: 1) create phylogenetic trees; 2) simulate traits on the phylogenies; 3) create networks based on species trait values.

        # packages
        install.packages(c("ape", "phytools"))
        library(ape); library(phytools)

        # Make phylogenetic trees
        tree_predator <- rcoal(10)
        tree_prey <- rcoal(10)

        # Simulate traits on each tree
        trait_predator <- fastBM(tree_predator)
        trait_prey <- fastBM(tree_prey)

        # Create network of predator and prey
        ## This is the part I can't do yet. I want to create bipartite networks, where
        ## predator and prey interact based on certain criteria. For example, predator
        ## species A and prey species B only interact if their body size ratio is
        ## greater than X.

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I figure that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are:

        - Works for Oracle 9iR2.
        - Human readable, so we can use diff to see the differences (.dmp files don't seem readable).
        - All tables in a batch; we have more than 200 tables.
        - Stores BOTH STRUCTURE AND DATA.
        - Supports CLOB and RAW types.
        - Stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms.
        - Can be turned into an executable script to rebuild the database on a clean machine.
        - Not limited to really small databases (supports at least 200,000 rows).

    It is not easy. I have downloaded a lot of demos, and they all fail in one way or another. EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in batch mode. By the way, our project has been developed for years; some approaches are easy to adopt on a fresh start but seem hard at this point. EDIT: To understand the problem better, consider that some users can occasionally change config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes, or it will be complicated to merge them into production.

  • Reflection and Operator Overloads in C#

    - by TenshiNoK
    Here's the deal. I've got a program that loads a given assembly, parses through all types and their members, compiles a TreeView (very similar to the old MSDN site), and then builds HTML pages for each node in the TreeView. It basically takes a given assembly and lets the user create their own MSDN-like library for it for documentation purposes. Here's the problem I've run into: whenever an operator overload is encountered in a class, reflection returns it as a MethodInfo with a name like "op_Assign" or "op_Equality". I want to be able to capture these and list them properly, but I can't find anything on the returned MethodInfo object that accurately identifies that I'm looking at an operator. I definitely don't want to just capture everything that starts with "op_", since that will most certainly, at some point, pick up a method it's not supposed to. I know that other methods and properties that are "special cases" like this one have the "IsSpecialName" property set, but apparently that's not the case with operators. I've been scouring the net and wracking my brain for two days trying to figure this one out, so any help will be greatly appreciated.
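
    A small experiment may help narrow this down: enumerate a type's public static methods (user-defined operators are always compiled as public static methods) and dump what reflection actually reports for the op_-prefixed ones. This is a hedged diagnostic sketch, not a guaranteed classifier:

        using System;
        using System.Reflection;

        static class OperatorProbe
        {
            public static void Dump(Type t)
            {
                foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Static))
                {
                    if (m.Name.StartsWith("op_", StringComparison.Ordinal))
                    {
                        // Shows, per candidate operator, what IsSpecialName reports.
                        Console.WriteLine("{0}: IsSpecialName={1}", m.Name, m.IsSpecialName);
                    }
                }
            }
        }

        // e.g. OperatorProbe.Dump(typeof(decimal)) lists op_Addition, op_Equality, ...

    Combining the op_ prefix with the public/static requirement and the IsSpecialName flag is a much narrower test than the name prefix alone, since an ordinary hand-written method named op_Something would not be compiler-marked as special.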

  • General web service ideas

    - by user2014175
    I have a question regarding different types of web services. I'll preface this by saying that I have built a number of apps (for both iOS and Android) for personal use that interact with the web via PHP and SQL. I taught myself these languages, and as such I don't have the broader background knowledge that many of you do. My question is: in what other ways can a web service and a mobile device interact, other than the mobile-to-PHP-to-SQL pattern? For example, if I built a very simple tracking app for my car, my current method would be to push GPS coordinates from my iPhone to my database at a set interval, and then write a simple bit of JavaScript that pulls those coordinates out of the database and superimposes them on a Google map. Is there a different way to do this, such as the server acting as a live middleman that pushes the coordinates directly to a target browser, without the database in the middle? If so, are there advantages and disadvantages to these different methods for different goals? I know it's a broad question, but I'm really intrigued and I'm finding it difficult to word a Google search for it. Any info or reading-material suggestions would be excellent. Thanks

  • ant conditions problem

    - by senzacionale
    I have a problem with Ant. I would like to use conditions in Ant, but I get this error:

        BUILD FAILED
        C:\Projekti\Projekt ANT\build.xml:412: Problem: failed to create task or type if
        Cause: The name is undefined.
        Action: Check the spelling.
        Action: Check that any custom tasks/types have been declared.
        Action: Check that any <presetdef>/<macrodef> declarations have taken place.

    And this is the code:

        <target name="test">
            <input message="Write some text: " addproperty="foo" />
            <if>
                <equals arg1="${foo}" arg2="bar" />
                <then>
                    <echo message="The value of property foo is 'bar'" />
                </then>
                <elseif>
                    <equals arg1="${foo}" arg2="foo" />
                    <then>
                        <echo message="The value of property foo is 'foo'" />
                    </then>
                </elseif>
                <else>
                    <echo message="The value of property foo is not 'foo' or 'bar'" />
                </else>
            </if>
        </target>

  • In need of a Smarter Environmental Package Configuration

    - by Jeremy Liberman
    I am trying to set up a package template in SSIS, following the Wrox Programmer to Programmer book "SQL Server 2008 Integration Services: Problem - Design - Solution". I'm really liking this book, even though it targets 2008 and we're using SQL Server 2005. I've got a working package template that uses an indirect XML package configuration to identify which environment (local developer, dev, QA, production, etc.) the package is running in. That locates the SQL Server package configuration for the environment. That setup is great, except for the environment variable at the very front of it all. My team would prefer the package to use the same environment resource locator as all of our other applications and tools, so we don't have two environment markers holding essentially the same information. Normally we look up a registry key under HKEY_LOCAL_MACHINE, but the Registry Entry package configuration type only reads from HKEY_CURRENT_USER. My first thought was to write a new package configuration type class that extends the registry type; after all, we'd had such luck writing our own custom log provider, and SSIS is super extensible, right? But there doesn't seem to be a way to write your own package configuration types. Is there still some way I can configure my SSIS SQL Server package configuration from an HKLM registry key connection string? If not, what workarounds are available? My idea is to write a PowerShell script that creates or updates the environment variable the package uses by fetching the connection string from the registry. There would still be two markers, but at least the second would be automatically maintained. Is this kind of workaround necessary? Thank you for your time.
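
    For the scripted workaround, the registry read itself is small. A hedged C# sketch of fetching a connection string from HKLM and mirroring it into a machine-level environment variable (the key path, value name, and variable name are all illustrative):

        using System;
        using Microsoft.Win32;

        class SyncEnvFromRegistry
        {
            static void Main()
            {
                // Illustrative key path and value name; substitute your own.
                using (RegistryKey key =
                    Registry.LocalMachine.OpenSubKey(@"SOFTWARE\MyCompany\Environment"))
                {
                    string connStr = (string)key.GetValue("ConfigConnectionString");

                    // Machine scope so the SSIS runtime can see it; requires admin rights.
                    Environment.SetEnvironmentVariable("SSIS_CONFIG_DB", connStr,
                                                       EnvironmentVariableTarget.Machine);
                }
            }
        }

    Run at deployment time (or on a schedule), this keeps the environment-variable marker derived from the registry marker rather than maintained by hand.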

  • byte + byte = int... why?

    - by Robert C. Cartaino
    Looking at this C# code...

        byte x = 1;
        byte y = 2;
        byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

    ...the result of any math performed on byte (or short) types is implicitly promoted to an integer. The solution is to explicitly cast the result back to a byte:

        byte z = (byte)(x + y); // works

    What I am wondering is why. Is it architectural? Philosophical? We have:

        int    + int    = int
        long   + long   = long
        float  + float  = float
        double + double = double

    So why not:

        byte  + byte  = byte
        short + short = short

    A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits), but the extensive byte casts spread through the code make it that much less readable.
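
    One concrete way to see why the promotion is defensible, as a worked example: byte arithmetic overflows almost immediately, and promoting to int preserves the true sum until the programmer decides how to narrow it.

        byte a = 200, b = 100;
        int sum = a + b;            // operands promoted to int, so 300 is preserved
        byte wrapped = (byte)sum;   // unchecked narrowing: 300 mod 256 = 44
        checked
        {
            byte guarded = (byte)sum;   // throws OverflowException instead of wrapping
        }

    With byte + byte = byte, the 300 would silently become 44 with no cast anywhere in the source to signal the truncation.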

  • Creating serializable unique compile-time identifiers for arbitrary UDTs

    - by Endiannes
    I would like a generic way to create unique compile-time identifiers for any C++ user-defined types. For example:

        unique_id<my_type>::value == 0    // true
        unique_id<other_type>::value == 1 // true

    I've managed to implement something like this using preprocessor metaprogramming. The problem is that serialization is not consistent: if the class template unique_id is instantiated with other_type first, then any serialization from previous revisions of my program is invalidated. I've searched for solutions to this problem and found several ways to implement it with non-consistent serialization when the unique values are compile-time constants. If RTTI or similar mechanisms such as boost::sp_typeinfo are used, the unique values are obviously not compile-time constants, and extra overhead is present. An ad hoc solution would be to instantiate all of the unique_ids in a separate header in a fixed order, but this causes additional maintenance and boilerplate code, which is no different from using an enum unique_id{my_type, other_type};. A good solution would be user-defined literals; unfortunately, as far as I know, no compiler supports them at this moment. The syntax would be 'my_type'_id; 'other_type'_id; with UDLs. I'm hoping somebody knows a trick that allows implementing serializable unique identifiers in C++ with the current standard (C++03/C++0x). I would be happy if it works with the latest stable MSVC and GNU g++ compilers, although I expect that if there is a solution, it's not portable.
