Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • SonicAgile Now with Dropbox Integration

    - by Stephen.Walther
    SonicAgile, our free Agile Project Management tool, now integrates with Dropbox. You can upload files such as logos, videos, and documentation, and associate the files with stories and epics. Before you can take advantage of this new feature, you need a Dropbox account. A free Dropbox account holds up to 2 gigabytes of data; see the pricing here: https://www.dropbox.com/pricing

    Connecting with Dropbox: You only need to connect your SonicAgile project to Dropbox once. Follow these steps:

    1. Login/Register at http://SonicAgile.com
    2. Click the Settings link to navigate to Project Settings.
    3. Select the Files tab (the last tab).
    4. Click the connect link to connect to Dropbox.

    After you complete these steps, a new folder is created in your Dropbox at Apps\SonicAgile. All of your SonicAgile files are stored there.

    Uploading Files to SonicAgile: After your SonicAgile project is connected to Dropbox, a new Files tab appears for every story. You can upload files under the Files tab by clicking the upload file link. Uploaded files are stored on your Dropbox under the Apps\SonicAgile folder. Be aware that anyone who is a member of your project – all of your team members – can upload, delete, and view any Dropbox files associated with any story in your project. Everyone in your project should have access to all of the information needed to complete the project successfully – this is the Agile way of doing things.

    Summary: I hope you like the new Dropbox integration! I think you'll find it really useful to be able to attach files to your work items. Use the comments section below to let me know what you think.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was "local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess." What that means is you should avoid code like this in your stored procs, even though it seems such an intuitively good idea:

        DECLARE @StartDate datetime
        SET @StartDate = '20091125'
        SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

        SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought was that we reference variables in the form of passed-in parameters in the WHERE clauses of many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
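
    To illustrate the distinction, here is a minimal sketch (table and column names are hypothetical) of the parameterized form that stays optimizer-friendly, because a parameter value – unlike a local variable – is visible to the Query Optimizer when the plan is compiled:

        -- A parameter, unlike a local variable, is sniffed at plan compile time
        CREATE PROCEDURE GetOrdersByDate
            @StartDate datetime
        AS
        BEGIN
            SELECT * FROM Orders WHERE OrderDate = @StartDate;
        END
        GO

        EXEC GetOrdersByDate @StartDate = '20091125';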

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up on once it becomes legacy code. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that no one has thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives? Edit: Just to clarify, this is a large enterprise app, so whichever solution works, there will be plenty of people (around 10) constantly maintaining its rules and data. Also, the data changes frequently enough that some sort of centralized server system is basically a must.
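
    One common alternative to string-evaluated rules is to keep the rule names and parameters in the DB but bind them to typed delegates at load time. A minimal C# sketch, with all type and rule names hypothetical:

        // A minimal sketch: rules stored as data, bound to typed predicates.
        // Order, the rule names, and the table they come from are hypothetical.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Order { public decimal Total; public string Country; }

        public static class RuleEngine
        {
            // Each named rule is a typed predicate instead of an eval'd string.
            static readonly Dictionary<string, Func<Order, bool>> Rules =
                new Dictionary<string, Func<Order, bool>>
            {
                { "LargeOrder",    o => o.Total > 10000m },
                { "DomesticOrder", o => o.Country == "US" }
            };

            // ruleNames would come from rows in the rules table.
            public static bool Passes(Order order, IEnumerable<string> ruleNames) =>
                ruleNames.All(name => Rules[name](order));
        }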

    Read the article

  • OBJECT_NAME parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you may be used to using the OBJECT_NAME system function – especially useful when converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL 2005 Service Pack 2 it accepts a second parameter: database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than inefficient but less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:

        SELECT TOP 20
            DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
            OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
            (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource' ELSE DB_NAME(qt.dbid) END) AS 'Database',
            OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
            SUM(execution_count) AS 'TotalExecutions',
            SUM(total_worker_time) AS 'TotalCPUTimeMS',
            SUM(total_elapsed_time) AS 'TotalRunTimeMS',
            SUM(total_logical_reads) AS 'TotalLogicalReads',
            SUM(total_logical_writes) AS 'TotalLogicalWrites',
            MIN(creation_time) AS 'earliestPlan',
            MAX(last_execution_time) AS 'lastExecutionTime'
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
        GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid, OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Book Signing Photos

    - by pinaldave
    I am at SQLPASS 2012 and the event is going great. Here are a few random photos and bits of news. We participated in three different book signing events today: the SQL Queries 2012 Joes 2 Pros Book 1 launch and book signing, the SQL 2012 Functions book launch at Embarcadero, and the SQL Backup and Recovery book launch at Idera. Rick Morelan and I authored the first two books: 1) SQL 2012 Functions and 2) SQL Queries 2012 Joes 2 Pros Volume 1. Our dear friend Tim Radney authored the SQL Backup and Recovery book. At Tim Radney's book signing event I arrived ahead of time and stood in line, and I was fortunate to receive the very first autographed copy of the book from him. We have one more signing event for SQL Backup and Recovery by Tim Radney on Friday, November 9, 2012 between 12 and 1 PM at the Joes 2 Pros booth #117. This is your last chance to shake hands with us and meet us in person. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • How to write a very basic compiler [closed]

    - by Ali
    Possible Duplicate: Best Online resources to learn about Compilers? What would be the best way to learn about compilers and executable formats? Compilers like gcc translate source code into machine-readable files according to the language in which the code has been written (e.g. C, C++, etc.). In effect, they interpret the meaning of each piece of code according to the libraries and functions of the corresponding language. Correct me if I'm wrong. I wish to better understand compilers by writing a very basic compiler (probably in C) to compile a static file (e.g. Hello World in a text file). I tried some tutorials and books, but all of them deal with practical cases: compiling dynamic code whose meaning is tied to the corresponding language. How can I write a basic compiler to convert a static text into a machine-readable file? The next step would be introducing variables into the compiler; imagine that we want to write a compiler which compiles only some functions of a language. Pointers to practical tutorials and resources are highly appreciated :-)
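
    To make the goal concrete, here is a minimal sketch of the kind of thing the question describes, assuming an x86-64 Linux target and the system assembler/linker (the target and toolchain are assumptions, not part of the question):

        /* A minimal "compiler": reads a text file and emits x86-64 Linux assembly
           that prints the file's contents. Assemble and link the output with
           "as out.s -o out.o && ld out.o -o out". */
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) { fprintf(stderr, "usage: %s file.txt\n", argv[0]); return 1; }
            FILE *in = fopen(argv[1], "rb");
            if (!in) { perror("fopen"); return 1; }

            long len = 0;
            int c;
            printf(".section .data\nmsg:\n");
            while ((c = fgetc(in)) != EOF) {        /* emit each byte as data */
                printf(".byte %d\n", c);
                len++;
            }
            fclose(in);

            printf(".section .text\n.globl _start\n_start:\n");
            printf("    mov $1, %%rax\n");          /* sys_write */
            printf("    mov $1, %%rdi\n");          /* fd 1 = stdout */
            printf("    lea msg(%%rip), %%rsi\n");
            printf("    mov $%ld, %%rdx\n", len);
            printf("    syscall\n");
            printf("    mov $60, %%rax\n");         /* sys_exit */
            printf("    xor %%rdi, %%rdi\n");
            printf("    syscall\n");
            return 0;
        }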

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use its built-in authentication and database-level authorization, is there any reason not to allow the client-side JavaScript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "only the author can edit their post," I don't see much need for a layer between the client-side code and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data, and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go; you wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea? EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:

    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about, other than admin functions, is that someone is only allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records, which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app
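
    For the "only the author can edit their post" rule, CouchDB design documents support a validate_doc_update function that runs on every write. A minimal sketch, where the author field name is an assumption:

        // Lives in a design document; CouchDB calls it for every write attempt.
        // The "author" field name is an assumption.
        function (newDoc, oldDoc, userCtx, secObj) {
          // admins may do anything
          if (userCtx.roles.indexOf('_admin') !== -1) return;

          // new posts must be stamped with the writer's own name
          if (!oldDoc && newDoc.author !== userCtx.name) {
            throw({ forbidden: 'author must match the logged-in user' });
          }

          // edits are allowed only by the original author
          if (oldDoc && oldDoc.author !== userCtx.name) {
            throw({ forbidden: 'only the author can edit their post' });
          }
        }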

    Read the article

  • Haskell: Best tools to validate textual input?

    - by Ana
    In Haskell, there are a few different options for parsing text. I know of Alex & Happy, Parsec, and Attoparsec – and probably some others. I'd like to put together a library where the user can input pieces of a URL (scheme e.g. HTTP, hostname, username, port, path, query, etc.), and I'd like to validate the pieces according to the ABNF specified in RFC 3986. In other words, I'd like to put together a set of functions such as:

        validateScheme :: String -> Bool
        validateUsername :: String -> Bool
        validatePassword :: String -> Bool
        validateAuthority :: String -> Bool
        validatePath :: String -> Bool
        validateQuery :: String -> Bool

    What is the most appropriate tool for writing these functions? Alex's regex notation is very concise, but it's a tokenizer and doesn't straightforwardly let you parse against specific rules, so it's not quite what I'm looking for – though perhaps it can be wrangled into doing this easily. I've written Parsec code that does some of the above, but it looks very different from the original ABNF and is unnecessarily long. So there must be an easier and/or more appropriate way. Recommendations?
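
    For reference, the scheme rule from RFC 3986 – scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." ) – transcribes into Parsec fairly directly. A minimal sketch of one such validator:

        import Text.Parsec
        import Text.Parsec.String (Parser)

        -- scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
        scheme :: Parser String
        scheme = (:) <$> letter <*> many (alphaNum <|> oneOf "+-.")

        validateScheme :: String -> Bool
        validateScheme s = case parse (scheme <* eof) "" s of
          Right _ -> True
          Left _  -> False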

    Read the article

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that satisfies the following criteria:

    1. Works with big projects (~5000 files, some of them 100K big) efficiently.
    2. Has good navigation, both file-based and symbol-based – i.e. "go to file", "go to function", etc., with autocompletion support.
    3. Supports "go to declaration/definition" for symbols – functions, structures, etc.
    4. Auto-adds new files in folders already in the project.
    5. Supports code completion for values, function names, etc.
    6. Has at least rudimentary CPP macro understanding – i.e. given #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full CPP parsing may be hard, but I hope for at least the obvious cases.
    7. Displays parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :)
    8. (optional) Cross-project search support (I can manage with grep -r if everything else works).
    9. (optional) SVN support, at least to some extent (update, commit, mark updated).

    Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now but I'm not satisfied with it. I tried Xcode but it seems unable to handle a large project – it just crashed...

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions

    Scalar User-Defined Function: A scalar user-defined function returns one of the scalar data types (text, ntext, image, and timestamp are not supported). These are the type of user-defined functions that most developers are used to from other programming languages.

    Inline Table-Value User-Defined Function: An inline table-value user-defined function returns a table data type and is an exceptional alternative to a view, as the function can accept parameters in its T-SQL SELECT command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

    Multi-statement Table-Value User-Defined Function: A multi-statement table-value user-defined function returns a table and is also an exceptional alternative to a view, as the function can contain multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters also gives us the capability to, in essence, create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure being returned. After creating this type of user-defined function, I can use it in the FROM clause of a T-SQL command – unlike a stored procedure, which can also return record sets but cannot be used that way.
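
    As an illustration of the "parameterized view" point, a minimal sketch of an inline table-valued function (table and column names are hypothetical):

        -- An inline TVF behaves like a view that accepts a parameter
        CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerId INT)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT OrderId, OrderDate, TotalDue
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
        );
        GO

        -- Used in a FROM clause, just like a view:
        SELECT * FROM dbo.fn_OrdersByCustomer(42);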

    Read the article

  • game multiplayer service development

    - by nomad
    I'm currently working on a multiplayer game. I've looked at a number of multiplayer services (player.io, playphone, gamespy, and others) but nothing really hits the mark: they are missing features, lack platform support, or cost too much. What I'm looking for is a simple poor man's version of Steam or Xbox Live – not the game marketplace side of those two, but the multiplayer services: user accounts, profiles, presence info, friends, game stats, invites, on/offline messaging. Basically I'm looking for a unified multiplayer platform for all my games across devices. Since I can't find what I need, I plan to roll my own piece by piece. I plan to save on server resources by making most of the communication p2p. Things like game data and voice chat can be handled between peers, and the server keeps track of user presence and only sends updates when needed or requested. I know this runs the risk of cheating, but that isn't a concern right now. I plan to run this on an Amazon EC2 micro instance for development, then move to a small or large instance when finished. I figure user accounts would be the simplest place to start: users can create accounts online or via an in-game dialog, log in/out, and change profile info, and the user can access this info online or in game. I will need user authentication and secure communication between server and client. I figure all info will be stored in a database, but I don't know how it can be stored securely and accessed from both the web server and the game services. I would appreciate any links to tutorials, info, or advice anyone could provide to get me started. Any programming language is fine, but I plan to use C# on the server and C/C++ on devices. I would like to get started right away but I'm in no hurry to finish just yet. If you know of a service that already fits my requirements, please let me know.
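
    On the "stored securely" point, a minimal C# sketch of salted password hashing with the built-in .NET PBKDF2 implementation (the iteration count and sizes are illustrative choices, not requirements):

        using System;
        using System.Security.Cryptography;

        public static class Passwords
        {
            // PBKDF2 with a random 16-byte salt; store salt and hash in the DB.
            public static byte[] Hash(string password, out byte[] salt)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, 16, 100000))
                {
                    salt = kdf.Salt;
                    return kdf.GetBytes(32);
                }
            }

            public static bool Verify(string password, byte[] salt, byte[] expected)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
                {
                    byte[] actual = kdf.GetBytes(32);
                    if (actual.Length != expected.Length) return false;
                    int diff = 0;   // constant-time comparison to avoid timing leaks
                    for (int i = 0; i < actual.Length; i++) diff |= actual[i] ^ expected[i];
                    return diff == 0;
                }
            }
        }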

    Read the article

  • How to use shared_ptr for COM interface pointers

    - by Seefer
    I've been reading various usage advice relating to the new C++ standard smart pointers unique_ptr, shared_ptr and weak_ptr, and I generally 'grok' what they are about when I'm writing my own code that declares and consumes them. However, all the discussions I've read seem restricted to the simple situation where the programmer is using smart pointers in his/her own code, with no real discussion of techniques for working with libraries that expect raw pointers or other types of 'smart pointers' such as COM interface pointers. Specifically, I'm learning my way through C++ by attempting to get a standard Win32 real-time game loop up and running that uses Direct2D & DirectWrite to render text to the display showing frames per second. My first task with Direct2D is creating a Direct2D factory object with the following code from the Direct2D examples on MSDN:

        ID2D1Factory* pD2DFactory = nullptr;
        HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);

    pD2DFactory is obviously an 'out' parameter, and it's here that I become uncertain how to make use of smart pointers, if indeed it's possible. My inexperienced C++ mind tells me I have two problems: 1) With pD2DFactory being a COM interface pointer type, how would shared_ptr work with the AddRef()/Release() member functions of a COM object instance? 2) Are smart pointers able to be passed to functions that use the 'out' pointer parameter technique? I did experiment with the alternative of using _com_ptr_t from the comip.h header to help with pointer lifetime management, and declared the factory pointer with the following code:

        _com_ptr_t<_com_IIID<ID2D1Factory, &__uuidof(ID2D1Factory)>> pD2DFactory = nullptr;

    and it appears to work so far but, as you can see, the syntax is cumbersome :) So, I was wondering if any C++ gurus here could confirm whether smart pointers are able to help in cases like this and provide examples of usage, or point me to more in-depth discussions of smart pointer usage when working with other code libraries that know nothing of them. Or is it simply a case of trying to use the wrong tool for the job? :)
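
    For what it's worth, a minimal sketch covering both points: receive the COM out-parameter into a raw pointer first, then hand ownership to a shared_ptr whose custom deleter calls Release():

        #include <d2d1.h>   // link with d2d1.lib
        #include <memory>

        // Take the out-parameter through a raw pointer, then attach it to a
        // shared_ptr with Release() as the deleter (never delete a COM object).
        std::shared_ptr<ID2D1Factory> CreateD2DFactory()
        {
            ID2D1Factory* raw = nullptr;
            HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &raw);
            if (FAILED(hr)) return nullptr;

            return std::shared_ptr<ID2D1Factory>(raw,
                [](ID2D1Factory* p) { if (p) p->Release(); });
        }

    Worth noting: Microsoft::WRL::ComPtr (in wrl/client.h) is purpose-built for this – it calls AddRef()/Release() for you and exposes GetAddressOf() precisely for out-parameter situations.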

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as the tags/subjects describing the articles, but as strings in the form "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside a database, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there a SQL operation I should be looking into? Please let me know if the problem isn't clear – I'm not entirely sure what kind of answer I'm looking for.
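
    As an illustration of the kind of set-based cleanup involved, a minimal MySQL sketch for the date example (table and column names are hypothetical):

        -- Strip the trailing 'd' marker and convert "YYYYMMDDd" strings to DATE
        ALTER TABLE articles ADD COLUMN published_on DATE;

        UPDATE articles
        SET published_on = STR_TO_DATE(LEFT(raw_tag, 8), '%Y%m%d')
        WHERE raw_tag REGEXP '^[0-9]{8}d$';   -- only rows matching the date format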

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured to audit both successful and failed logins, with the audit written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog – which meant huge SQL Server errorlogs. Opening an errorlog the usual way, using SQL Server Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this – here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure – so there is no official Microsoft page describing the parameters it takes – but there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. If you do this, remember to change the number of errorlogs you retain – the default of 6 might not be sufficient for you.

    Read the article

  • Is LINQ to objects a collection of combinators?

    - by Jimmy Hoffa
    I was just trying to explain the usefulness of combinators to a colleague, and I told him the LINQ to Objects operators are like combinators, as they exhibit the same value: the ability to combine small pieces to create a single large piece. Though I don't know that I can actually call LINQ to Objects combinators. I've seen two levels of definition for combinator, which I generalize as follows: 1) a combinator is a function which only uses things passed to it; 2) a combinator is a function which only uses things passed to it and other standard atomic functions, but not state. The first is very rigid and can be seen in the combinatory calculus systems; in Haskell, things like $ and . and various similar functions meet this rule. The second is less rigid and would allow something like sum, as it uses the + function, which was not passed in but is standard and not stateful. However, the LINQ extensions in C# use state in their iteration models, so I feel I can't say they're combinators. Can someone who understands the definition of a combinator more thoroughly, and with more experience in these realms, give a distinct ruling on this? Are my definitions of 'combinator' wrong to begin with?
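
    For concreteness, this is the kind of combining I mean – small standard pieces composed into one pipeline, in the spirit of Haskell's (.) composition:

        using System.Linq;

        // Small standard pieces (Where, Select, Sum) combined into one expression.
        int result = Enumerable.Range(1, 10)
            .Where(n => n % 2 == 0)   // keep the evens
            .Select(n => n * n)       // square them
            .Sum();                   // 4 + 16 + 36 + 64 + 100 = 220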

    Read the article

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to introduce at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL. Perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you didn't have to write SQL; it could be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>(
            "select * from people where PersonId = @personId",
            new { PersonId = 42 });

    How is that any different from using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters, and (I suppose) map the entity back using a builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits I'm missing where a micro ORM makes sense to use? I'm not really seeing how it saves anything over the "old" way of using ADO.NET except maybe a few lines of code – you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).

    Read the article

  • How can I improve my error checking and handling?

    - by Google
    Lately I have been struggling to understand what the right amount of checking is and what the proper methods are. I have a few questions regarding this: What is the proper way to check for errors (bad input, bad states, etc.)? Is it better to explicitly check for errors, or to use functions like asserts which can be optimized out of your final code? I feel like explicit checking clutters a program with a lot of extra code which shouldn't be executed in most situations anyway – and not to mention most errors end up in an abort/exit failure. Why clutter a function with explicit checks just to abort? I have looked into asserts versus explicit checking of errors and found little that truly explains when to do either. Most say 'use asserts to check for logic errors and use explicit checks to check for other failures.' This doesn't seem to get us very far, though. Would we say this is feasible: malloc returning NULL – check explicitly; an API user passing odd input to functions – use asserts? Would this make me any better at error checking? What else can I do? I really want to improve and write better, 'professional' code.
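
    To make that proposed split concrete, a minimal C sketch under exactly that rule – explicit checks for runtime failures, asserts for caller contracts:

        #include <assert.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Runtime failure (can happen in a correct program): check explicitly. */
        void *xmalloc(size_t n)
        {
            void *p = malloc(n);
            if (p == NULL) {
                fprintf(stderr, "out of memory (%zu bytes)\n", n);
                exit(EXIT_FAILURE);
            }
            return p;
        }

        /* Caller contract (a violation is a bug): assert, removed by -DNDEBUG. */
        double mean(const double *xs, size_t len)
        {
            assert(xs != NULL);
            assert(len > 0);
            double sum = 0.0;
            for (size_t i = 0; i < len; i++)
                sum += xs[i];
            return sum / (double)len;
        }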

    Read the article

  • PHP: A config file in .ini or .php format?

    - by Tim Visee
    I'm working on a huge CMS system, and I asked myself what configuration format I should use. There are two common formats for configuration files. The first is an INI file containing all the configuration properties; you can then parse this INI file using built-in PHP functions. A second option is a PHP file containing a regular PHP array with these configuration properties. Now, it's easier to edit an INI file, but a PHP file gives you more options – for example, it allows you to add a function which retrieves one of the configuration options while reading the configuration file. Note: the PHP configuration file would only contain an array of configuration values, no initialization functions or anything (that is possible, of course, but it's not implemented by default). Now, what is recommended for me to use as a configuration file? What is the most common format for a configuration file? Should I go for simplicity with INI files, or for something more dynamic using PHP? One thing to note: this is not for personal use. I'm planning to release the CMS system soon, and a lot of websites are already scheduled to switch to it.
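
    For comparison, minimal sketches of the two approaches (file names and keys are hypothetical):

        <?php
        // Option 1: INI file parsed with a PHP built-in.
        // config.ini:  [database]
        //              host = "localhost"
        $config = parse_ini_file('config.ini', true);   // true = keep [sections]
        echo $config['database']['host'];

        // Option 2: PHP file that simply returns an array.
        // config.php:  <?php return ['database' => ['host' => 'localhost']];
        $config = require 'config.php';
        echo $config['database']['host'];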

    Read the article

  • Explanation on how "Tell, Don't Ask" is considered good OO

    - by Pubby
    This blog post was posted on Hacker News with several upvotes. Coming from C++, most of these examples seem to go against what I've been taught. Such as example #2 – bad:

        def check_for_overheating(system_monitor)
          if system_monitor.temperature > 100
            system_monitor.sound_alarms
          end
        end

    versus good:

        system_monitor.check_for_overheating

        class SystemMonitor
          def check_for_overheating
            if temperature > 100
              sound_alarms
            end
          end
        end

    The advice in C++ is that you should prefer free functions to member functions, as they increase encapsulation. Both of these are identical semantically, so why prefer the choice that has access to more state? Example #4 – bad:

        def street_name(user)
          if user.address
            user.address.street_name
          else
            'No street name on file'
          end
        end

    versus good:

        def street_name(user)
          user.address.street_name
        end

        class User
          def address
            @address || NullAddress.new
          end
        end

        class NullAddress
          def street_name
            'No street name on file'
          end
        end

    Why is it the responsibility of User to format an unrelated error string? What if I want to do something besides print 'No street name on file' if it has no street? What if the street is named the same thing? Could someone enlighten me on the advantages and rationale of "Tell, Don't Ask"? I am not looking for which is better, but am instead trying to understand the author's viewpoint.

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually as parameters for functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies some sort of protocol, which I don't have. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

    Read the article

  • New way of integrating Openfeint in Cocos2d-x 0.12.0

    - by Ef Es
    I am trying to implement OpenFeint for Android in my cocos2d-x project. My approach so far has been to create a button that calls a static Java method in a class Bridge using the jnihelper functions (jnihelper only accepts statics). Bridge has one singleton attribute of type OFAndroid, the class that dynamically calls the OpenFeint API methods, and every method in the bridge just forwards to the OFAndroid object. What I am trying to do now is initialize the OpenFeint libraries in the main Java class, which is the one calling the static C++ libraries. My problem right now is that the initializing function, void com.openfeint.api.OpenFeint.initialize(Context ctx, OpenFeintSettings settings, OpenFeintDelegate delegate), is not accepting the context parameter I am giving it, which is a "this" reference to the main class. The main class extends Cocos2dxActivity, but I don't have any other class that extends Application. Any suggestions on fixing this or improving the architecture? EDIT: I am trying a new solution: make the Bridge class an Application child that is called from the Main object, initializes OpenFeint when created, and calls the OpenFeint functions itself instead of needing an additional class. The problem is I still get the error:

        03-30 14:39:22.661: E/AndroidRuntime(9029): Caused by: java.lang.NullPointerException
        03-30 14:39:22.661: E/AndroidRuntime(9029): at android.content.ContextWrapper.getPackageManager(ContextWrapper.java:85)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.validateManifest(OpenFeintInternal.java:885)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initializeWithoutLoggingIn(OpenFeintInternal.java:829)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initialize(OpenFeintInternal.java:852)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.api.OpenFeint.initialize(OpenFeint.java:47)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.NuroFeint.onCreate(NuroFeint.java:23)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.FastFish.onCreate(FastFish.java:47)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1069)
        03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2751)
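
    A minimal sketch of the Application-subclass route – only the initialize signature is taken from the question above; the OpenFeintSettings construction, imports, and class names are assumptions from OpenFeint sample code:

        import android.app.Application;
        import com.openfeint.api.OpenFeint;
        import com.openfeint.api.OpenFeintDelegate;
        import com.openfeint.api.OpenFeintSettings;

        // Must be registered in AndroidManifest.xml via android:name on the
        // <application> element; otherwise this onCreate never runs and the
        // Context passed to initialize() is not a fully-initialized one.
        public class FastFishApplication extends Application {
            @Override
            public void onCreate() {
                super.onCreate();
                // Constructor arguments are assumptions; use your own keys here.
                OpenFeintSettings settings =
                    new OpenFeintSettings("AppName", "key", "secret", "appId");
                OpenFeint.initialize(getApplicationContext(), settings,
                    new OpenFeintDelegate() { });
            }
        }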

    Read the article

  • Gödel, Escher, Bach - Gödel's string

    - by Brad Urani
    In the book Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter gives us a representation of the precursor to Gödel's string (Gödel's string's uncle) as: ~Ea,a': (I don't have the book in front of me, but I think that's right). All that remains to get Gödel's string is to plug the Gödel number for this string into the free variable a''. What I don't understand is how to get the Gödel numbers for the procedures PROOF-PAIR and ARITHMOQUINE. I understand how to write these procedures in a programming language like FlooP (from the book) and could even write them myself in C# or Java, but the scheme Hofstadter defines for Gödel numbering only applies to TNT (which is just his own syntax for natural number theory), and I don't see any way to write a procedure in TNT, since it doesn't have loops, variable assignments, etc. Am I missing the point? Perhaps Gödel's string is not something that can actually be printed, but rather a theoretical string that need not actually be constructed? I thought it would be neat to write a computer program that actually prints Gödel's string, or Gödel's string encoded by Gödel numbering (yes, I realize it would have a gazillion digits), but it seems doing so requires some kind of procedural language, plus a Gödel numbering system for that procedural language, that isn't included in the book. Of course, once you had that, you could write a program that plugs random numbers into the variable "a" and runs PROOF-PAIR on them to test for theoremhood of Gödel's string. If you let it run for a trillion years you might find a derivation that proves Gödel's string.

    Read the article

  • C# vector class - Interpolation design decision

    - by Benjamin
    Currently I'm working on a vector class in C#, and now I'm coming to the point where I have to figure out how I want to implement the functions for interpolation between two vectors. At first I came up with implementing the functions directly in the vector class:

        public class Vector3D
        {
            public static Vector3D LinearInterpolate(Vector3D vector1, Vector3D vector2, double factor) { ... }
            public Vector3D LinearInterpolate(Vector3D other, double factor) { ... }
        }

    (I always offer both: a static method with two vectors as parameters, and a non-static one with only one vector as parameter.) But then I got the idea to use extension methods (defined in a separate class called "Interpolation", for example), since interpolation isn't really something available only to vectors. So this could be another solution:

        public class Vector3D { ... }

        public static class Interpolation
        {
            public static Vector3D LinearInterpolate(this Vector3D vector, Vector3D other, double factor) { ... }
        }

    So here is an example of how you'd use the different possibilities:

        var vec1 = new Vector3D(5, 3, 1);
        var vec2 = new Vector3D(4, 2, 0);
        Vector3D vec3;
        vec3 = vec1.LinearInterpolate(vec2, 0.5);                //1
        vec3 = Vector3D.LinearInterpolate(vec1, vec2, 0.5);      //2
        // or with extension methods
        vec3 = vec1.LinearInterpolate(vec2, 0.5);                //3 (same as 1)
        vec3 = Interpolation.LinearInterpolate(vec1, vec2, 0.5); //4

    So I really don't know which design is better. Also, I don't know if there's an ultimate rule for things like this, or if it's just about what someone personally prefers. But I would really like to hear your opinions on what's better (and if possible, why).
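
    For completeness, a minimal sketch of what the extension-method body might look like, assuming Vector3D exposes X, Y and Z properties and an (x, y, z) constructor:

        public static class Interpolation
        {
            // Standard lerp: a + (b - a) * t, applied per component.
            public static Vector3D LinearInterpolate(this Vector3D a, Vector3D b, double t)
            {
                return new Vector3D(
                    a.X + (b.X - a.X) * t,
                    a.Y + (b.Y - a.Y) * t,
                    a.Z + (b.Z - a.Z) * t);
            }
        }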

    Read the article

  • Is creating a separate pool for each individual png image in the same class appropriate?

    - by Panzercrisis
    I'm still possibly a little green about object pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3):

        [Embed(source = "/../assets/images/game/misc/red_door.png")]
        private const RED_DOOR:Class;
        private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR());
        private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true);

        ...

        public function produceRedDoor():Image
        {
            // get a Red Door image
        }

        public function retireRedDoor(pImage:Image):void
        {
            // retire a Red Door Image
        }

    Except that there are four colors: red, green, blue, and yellow. So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally, there are several items in the game that follow this four-color pattern, so for each of them we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so trying to throw all the doors, for instance, into a single pool and then changing their color properties around isn't going to work. Also, the nonexistence of the static keyword is due to its slowness in AS3. Is this the right way to do things?
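
    For reference, here is a minimal sketch of how one produce/retire pair could be filled in, treating the fixed-length Vector as a stack (the counter field m_iRedDoorCount is hypothetical):

        private var m_iRedDoorCount:int = 0;

        public function produceRedDoor():Image
        {
            if (m_iRedDoorCount > 0)
                return m_vRedDoorPool[--m_iRedDoorCount];
            return new Image(RED_DOOR_TEXTURE); // pool empty: allocate a new one
        }

        public function retireRedDoor(pImage:Image):void
        {
            if (m_iRedDoorCount < m_vRedDoorPool.length)
                m_vRedDoorPool[m_iRedDoorCount++] = pImage;
        }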

    Read the article

  • MDX needs a function or macro syntax

    - by Darren Gosbell
    I was having an interesting discussion with a few people about the impact of named sets on performance (the same discussion noted by Chris Webb here: http://cwebbbi.wordpress.com/2011/03/16/referencing-named-sets-in-calculations). Apparently the core of the performance issue comes down to the way named sets are materialized within the SSAS engine, which led me to the thought that what we really need is a syntax for declaring a non-materialized set – or, to take this even further, a way of declaring an MDX expression as a function or macro so that it can be re-used in multiple places. Sometimes you do want the set materialized, such as when you use an ordered set for calculating rankings. But a lot of the time we just want to make our MDX modular and avoid repeating the same code over and over. I did some searches on Connect and could not find any similar suggestions, so I posted one here: https://connect.microsoft.com/SQLServer/feedback/details/651646/mdx-macro-or-function-syntax. Although apparently I did not search quite hard enough, as Chris Webb made a similar suggestion some time ago, which also included a request for true MDX stored procedures (not the .NET-style stored procs we have at the moment): https://connect.microsoft.com/SQLServer/feedback/details/473694/create-parameterised-queries-and-functions-on-the-server. Chris also pointed out this post he did last year, http://cwebbbi.wordpress.com/2010/09/13/iccube/, where he noted that the icCube product already has this sort of functionality. So if you think either or both of these suggestions is a good idea, I would encourage you to click on the links and vote for them.
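
    As an illustration of the case where materialization genuinely helps, here is the classic ranking pattern built on an ordered named set, which is evaluated once and then reused for every row (cube, hierarchy, and measure names from the Adventure Works sample are assumptions):

        WITH
          SET [OrderedCustomers] AS
            ORDER([Customer].[Customer].MEMBERS,
                  [Measures].[Internet Sales Amount], BDESC)
          MEMBER [Measures].[Customer Rank] AS
            RANK([Customer].[Customer].CURRENTMEMBER, [OrderedCustomers])
        SELECT { [Measures].[Internet Sales Amount], [Measures].[Customer Rank] } ON 0,
               NON EMPTY [Customer].[Customer].MEMBERS ON 1
        FROM [Adventure Works]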

    Read the article
