Search Results

Search found 19018 results on 761 pages for 'raw input'.

Page 374/761

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code, which is then used to calculate P&L, hedging requirements, or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted in the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet?

    Westclintech has developed a library of functions that match or exceed the functionality of Excel's functions and contains many functions that are not available in Excel. Their XLeratorDB library contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database.

    In our hypothetical spreadsheet, let's assume that our users are using the YIELD function in column G and that the data are extracted from a table in our database called BONDS. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS

    This illustrates one of the best features of XLeratorDB: it is so easy to use. Since I knew that the spreadsheet was using the YIELD function, I could use the same function with the same calling structure to do the calculation in SQL Server. I didn't need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that's one way to construct the SQL: just copy the function call from the cell in the spreadsheet, paste it into SSMS, and change the cell references to column names. I built the SQL for this query by starting with this: SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS

    I then changed the cell references to column names: SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS

    Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name: SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS

    Executing that statement returns the yield for every row in BONDS. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel.
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we open the Scalar-valued Functions folder, scroll down to the wct.YIELD function, and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL.

    XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios, and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Excel
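
    For reference, here is a minimal, self-contained sketch of the pattern described above. The column names come from the article's query; the table definition and column types are assumptions added purely so the example runs, not the actual BONDS schema.

        -- Hypothetical BONDS table; names from the article, types assumed for illustration.
        CREATE TABLE BONDS (
            Maturity  date  NOT NULL,  -- maturity date of the bond
            Rate      float NOT NULL,  -- coupon rate
            Price     float NOT NULL,  -- price per 100 of face value
            Frequency int   NOT NULL,  -- coupon payments per year
            Basis     int   NOT NULL   -- day-count basis
        );

        -- Yield calculation with the XLeratorDB scalar function, exactly as in the article.
        SELECT *,
               wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD
        FROM BONDS;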

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives ensure that the real object matches the linking interface presented by the stub. Although ASSERT directives were added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental material for the stub object PSARC case and of supplying a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and the application then uses the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

        % cat idx5.c
        int _idx5[5] = { 0, 1, 2, 3, 4 };

        #pragma weak idx5 = _idx5

        int
        idx5_func(int index)
        {
            if ((index < 0) || (index > 4))
                return (-1);
            return (_idx5[index]);
        }

    A mapfile is required to describe the interface provided by this shared object.

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
            _idx5 {
                ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
          local:
            *;
        };

    The following main program is used to print all the index values available from the idx5 shared object.
        % cat main.c
        #include <stdio.h>

        extern int _idx5[5], idx5[5], idx5_func(int);

        int
        main(int argc, char **argv)
        {
            int i;

            for (i = 0; i < 5; i++)
                (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));

            return (0);
        }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
        % ln -s libidx5.so.1 stublib/libidx5.so
        % elfdump -d stublib/libidx5.so | grep STUB
            [11]  FLAGS_1   0x4000000   [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

        % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
        % ./a.out
        ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
        Killed
        % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
        ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
        Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

        % ./a.out
        [0] 0 0 0
        [1] 1 1 1
        [2] 2 2 2
        [3] 3 3 3
        [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name        Meaning
        ...         ...
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ...         ...

    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

        ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;
            SIZE = size_value;
            SIZE = size_value[count];
        };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require ASSERT attributes under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BIND
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute   Meaning
        BITS                Section is not of type SHT_NOBITS
        NOBITS              Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

        $if _ELF32
        foo { ASSERT { TYPE=data; SIZE=4 } };
        bar { ASSERT { TYPE=data; SIZE=20 } };
        $elif _ELF64
        foo { ASSERT { TYPE=data; SIZE=8 } };
        bar { ASSERT { TYPE=data; SIZE=40 } };
        $else
        $error UNKNOWN ELFCLASS
        $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

        foo { ASSERT { TYPE=data; SIZE=addrsize } };
        bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!
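
    To make that recipe concrete, here is a minimal sketch of the function-only case described in the conclusion. The library name (libfoo) and symbol (foo_func) are hypothetical placeholders; the mapfile and commands follow the idx5 example above.

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
            foo_func;
          local:
            *;
        };

        % cc -Kpic -G -M mapfile -h libfoo.so.1 foo.c -o stublib/libfoo.so.1 -z stub
        % cc -Kpic -G -M mapfile -h libfoo.so.1 foo.c -o lib/libfoo.so.1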

    Read the article

  • Advice on SCRUM for the solitary developer [closed]

    - by ProfK
    Possible Duplicate: Agile for the Solo Developer I am looking for advice on the SCRUM process for a solitary developer. Most SCRUM resources I see focus on its use in a team environment, hence my question here. I'd like some guidance on structuring and managing my projects for SCRUM, with me as a solitary developer and business owner, but still occasionally including my clients for input and feedback. Areas I'm not clear on include resolving my backlog into 'sprintable' project areas and stories, defining user stories properly with a view to being digested by developer level users, defining feasible sprints for a single developer etc. Essentially I'm looking for advice on moving from using scrum in a team/office environment, with colleagues and project manager, and using chaos/cowboy-coding on my own, to assuming the role of PM myself and adopting scrum for work on my own. Any advice is welcome.

    Read the article

  • Before and After the Programmer Life [closed]

    - by Gio Borje
    How were people before and after becoming programmers, viewed through a black box? In a black box, the implementation is irrelevant; therefore, we focus on the input and output: High School Nerd -> becomeProgrammer() -> Manager, or more generally, (Person Before) -> (Person as Programmer) -> (New Person).

    (Person Before) -> (Person as Programmer): How are the typical lives of people before they became programmers? Why do people pursue life as a programmer?

    (Person as Programmer) -> (New Person): How has becoming a programmer changed people afterwards? Why quit being a programmer?

    Anecdotes would be nice. If many programmers have similar backgrounds and fates, do you think there is some sort of stereotypical person who is destined to become a programmer?

    Read the article

  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at SuperUser, as my other sources (mainly Google ;-)) didn't prove very helpful... So basically, I would like to create a VHD image of a physical disk to be archived, accessed, and maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve: I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...). The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like to find a solution that doesn't require booting that Windows XP install (i.e. keep the disk read-only). Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network). However, all the solutions that I've come across either make use of disk2vhd or use the dd command under Linux, which does a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
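
    One possible answer, offered here as a hedged sketch rather than something from the original question: the qemu-img tool (from the qemu-utils package on most live systems) can write VHD images directly, and its default "vpc" output is a dynamic image, so unused blocks should not be allocated. Device and file names below are placeholders.

        # image the physical disk (read-only source) straight to a dynamic VHD
        $ sudo qemu-img convert -p -f raw -O vpc /dev/sdX /mnt/backup/disk.vhd

        # or convert an existing dd image to VHD afterwards
        $ qemu-img convert -f raw -O vpc disk-image.raw disk-image.vhd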

    Read the article

  • How can I prevent [flush-8:16] and [jbd2/sdb2-8] from causing GUI unresponsiveness?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness? Details $ mount | grep ^/dev /dev/sda1 on / type ext4 (rw,noatime,discard,errors=remount-ro,commit=0) /dev/sdb2 on /home type ext4 (rw,commit=0) /dev/sda is an OCZ-VERTEX2 and /dev/sdb is a WD10EARS. Here is dumpe2fs /dev/sdb2, if it's relevant.

    Read the article

  • Dangerous programming

    - by benhowdle89
    OK, I'm talking pure software/web; I'm not on about code to power life support machines or NASA rockets. In terms of software/web development, what is the most dangerous single piece of code someone could put into a program (say, if they had a grudge against a client/employee)? In PHP, the first thing that comes to mind is some sort of file deletion:

        function EmptyDir($dir) {
            $handle = opendir($dir);
            while (($file = readdir($handle)) !== false) {
                echo "$file <br>";
                @unlink($dir.'/'.$file);
            }
            closedir($handle);
        }
        EmptyDir('images');

    Or a PHP script that takes a user's sensitive input and posts it to Google sitemap or something? I hope this doesn't get closed off as subjective, as there surely must be a ranking order of dangerous code. So I'm asking for the No.1 spot :) DISCLAIMER: I have no grudges against anyone, just curious for the answer!

    Read the article

  • Apple Aluminum Keyboard Via Bluetooth - Fn Key Problem

    - by Richard
    I'm connecting an Apple Bluetooth Aluminum keyboard (this one) to my Lubuntu setup using the blueman applet. The keyboard types fine, but I would like to use its fn key for changing screen brightness, page-up (fn+ctrl+down), page-down (fn+ctrl+up), et cetera. Right now the fn key doesn't seem to work. When I use xev, I don't see anything happen when I press fn. Does the keyboard not send this to the computer at all? Do I need to configure blueman's "Input Service" setting to make this an Apple (rather than a generic) keyboard? (It's not obvious how to do this.) Is xev just not showing the fn key? Where in this stack of software do I need to make a change to achieve the desired behaviour? Thanks!
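
    Not part of the original question, but one hedged thing to check: on Linux, Apple keyboards are usually handled by the hid_apple kernel module, and its fnmode parameter controls how fn combines with the top-row keys (fn itself is typically consumed by the driver rather than delivered as a keycode, which would explain xev showing nothing). Assuming the Bluetooth keyboard is indeed driven by hid_apple:

        # see the current setting (0 = fn disabled, 1 = media keys by default, 2 = F1-F12 by default)
        $ cat /sys/module/hid_apple/parameters/fnmode

        # try a different mode on the fly
        $ echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode

        # make it permanent (the file name is just a convention)
        $ echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf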

    Read the article

  • Is there an alternative to the default lock-screen?

    - by cheshirekow
    I have a tablet PC that I'm running Oneiric on. One thing I don't like is that the default lock screen requires you to enter a password. Since this is a tablet, this means that it either needs to be connected to a keyboard, or I have to figure out how to get onboard to show up for the lock screen (caribou won't even start, and gok is messed up too). Even if I could do that, anyway, it's not what I want, since locking the screen is mostly to prevent erroneous input when I'm carrying it around (not for security). Are there any alternative applications for the lock screen? Anything like the plethora of Android lock screens that allow you to solve puzzles or push a particular widget in a particular way to unlock the screen?

    Read the article

  • Turn-based Strategy Loop

    - by Djentleman
    I'm working on a strategy game. It's turn-based and card-based (think Dominion-style), done in a client, with eventual AI in the works. I've already implemented almost all of the game logic (methods for calculations and suchlike) and I'm starting to work on the actual game loop. What is the "best" way to implement a game loop in such a game? Should I use a simple "while gameActive" loop that keeps running until gameActive is False, with sections that wait for player input? Or should it be managed through the UI with player actions determining what happens and when? Any help is appreciated. I'm doing it in Python (for now at least) to get my Python skills up a bit, although the language shouldn't matter for this question.
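
    A minimal sketch of the "while gameActive" approach in Python, since that is the language mentioned; every name here (Player, take_turn, the scoring rule) is a placeholder rather than anything from the actual game:

        import random

        class Player:
            """Stand-in player; a real game would wrap human input or an AI decision here."""
            def __init__(self, name, is_human=False):
                self.name = name
                self.is_human = is_human
                self.score = 0

            def take_turn(self):
                if self.is_human:
                    # the loop simply blocks here, waiting for player input
                    input("%s, press Enter to take your turn..." % self.name)
                return random.randint(1, 10)  # stand-in for the real card/turn logic

        def game_loop(players, target=30):
            game_active = True
            while game_active:  # runs until the win condition is met
                for player in players:
                    player.score += player.take_turn()
                    print("%s now has %d points" % (player.name, player.score))
                    if player.score >= target:
                        print("%s wins!" % player.name)
                        game_active = False
                        break

        if __name__ == "__main__":
            game_loop([Player("You", is_human=True), Player("AI")])

    The UI-driven alternative inverts this structure: instead of one blocking loop, each player action in the interface fires a handler that advances the turn state, which usually fits a GUI client more naturally.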

    Read the article

  • Will I have an easier time learning OpenGL in Pygame or Pyglet? (NeHe tutorials downloaded)

    - by shadowprotocol
    I'm looking between PyGame and Pyglet. Pyglet seems to be somewhat newer and more Pythony, but its last release according to Wikipedia is January '10. PyGame seems to have more documentation, more recent updates, and more published books/tutorials on the web for learning. I downloaded both the Pyglet and PyGame versions of the NeHe OpenGL tutorials (Lessons 1-10), which cover this material:

    lesson01 - Setting up the window
    lesson02 - Polygons
    lesson03 - Adding color
    lesson04 - Rotation
    lesson05 - 3D
    lesson06 - Textures
    lesson07 - Filters, Lighting, input
    lesson08 - Blending (transparency)
    lesson09 - 2D Sprites in 3D
    lesson10 - Moving in a 3D world

    What do you guys think? Is my hunch that I'll be better off working with PyGame somewhat warranted?
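
    For comparison, a lesson01-style "setting up the window" sketch in PyGame looks roughly like this (assuming PyOpenGL is installed next to pygame; the actual NeHe ports may differ in detail):

        import pygame
        from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT

        def main():
            pygame.init()
            # OPENGL | DOUBLEBUF requests an OpenGL-capable, double-buffered window from SDL
            pygame.display.set_mode((640, 480), pygame.OPENGL | pygame.DOUBLEBUF)
            pygame.display.set_caption("NeHe lesson01-style window")
            glClearColor(0.0, 0.0, 0.0, 1.0)

            running = True
            while running:
                for event in pygame.event.get():
                    if event.type == pygame.QUIT:
                        running = False
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                pygame.display.flip()  # swap buffers
            pygame.quit()

        if __name__ == "__main__":
            main()

    Pyglet's equivalent wraps the window and event loop in its own classes, so the same lesson reads quite differently between the two libraries.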

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:

        X-Spam-Flag: YES
        X-Spam-Score: 8.521
        X-Spam-Level: ********
        X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
            tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
            NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
            RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no

    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:

        Content analysis details:   (3.8 points, 5.0 required)

         pts rule name                 description
        ---- ------------------------- ----------------------------------------------
         1.5 URIBL_RHS_DOB             Contains an URI of a new domain (Day Old Bread)
                                       [URIs: cliobeads.com]
        -1.0 ALL_TRUSTED               Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE              BODY: HTML included in message
        -0.0 BAYES_20                  BODY: Bayes spam probability is 5 to 20%
                                       [score: 0.1855]
         1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                       [cf: 100]
         0.5 RAZOR2_CF_RANGE_51_100    Razor2 gives confidence level above 50%
                                       [cf: 100]
         0.9 RAZOR2_CHECK              Listed in Razor2 (http://razor.sf.net/)

    I'm really confused as to why, when the email comes in, it gets a completely different score attached to it than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder:

        sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null

    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!
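
    Not from the original post, but one hedged way to narrow this down is to confirm that the command-line test is really reading the same Bayes database that amavis uses, and to watch the Bayes debug output while rescoring the saved message (the paths are the ones mentioned above):

        # show token/ham/spam counts for the database amavis maintains
        $ sudo -u amavis sa-learn --dbpath=/var/lib/amavis/.spamassassin --dump magic

        # rescore the saved message with Bayes debugging enabled
        $ sudo -u amavis spamassassin -D bayes -t < msg 2>&1 | less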

    Read the article

  • "Optimal" game loop for 2D side-scroller

    - by MrDatabase
    Is it possible to describe an "optimal" (in terms of performance) layout for a 2D side-scroller's game loop? In this context the "game loop" takes user input, updates the states of game objects and draws the game objects. For example having a GameObject base class with a deep inheritance hierarchy could be good for maintenance... you can do something like the following: foreach(GameObject g in gameObjects) g.update(); However I think this approach can create performance issues. On the other hand all game objects' data and functions could be global. Which would be a maintenance headache but might be closer to an optimally performing game loop. Any thoughts? I'm interested in practical applications of near optimal game loop structure... even if I get a maintenance headache in exchange for great performance.
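
    As a point of reference, here is a small Python sketch (names invented for illustration) contrasting the per-object g.update() style with a flatter, data-oriented loop; the shape of the loop itself (input, update with a fixed timestep, draw) is the same either way:

        import time

        class GameObject:
            """Deep-hierarchy style: each object updates itself."""
            def __init__(self, x, vx):
                self.x, self.vx = x, vx

            def update(self, dt):
                self.x += self.vx * dt

        def update_flat(xs, vxs, dt):
            """Data-oriented style: positions and velocities live in plain lists."""
            for i in range(len(xs)):
                xs[i] += vxs[i] * dt

        def run(game_objects, frames=60, dt=1.0 / 60.0):
            for _ in range(frames):
                # 1. read input here
                for g in game_objects:   # 2. update every object with a fixed timestep
                    g.update(dt)
                # 3. draw here
                time.sleep(dt)

        if __name__ == "__main__":
            objs = [GameObject(0.0, 50.0), GameObject(10.0, -20.0)]
            run(objs, frames=6)
            print([round(g.x, 1) for g in objs])

    In practice the difference usually only starts to matter with thousands of objects per frame; profiling the actual loop is the surest way to know whether per-object virtual calls are the bottleneck.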

    Read the article

  • Webcast - Social BPM: Integrating Enterprise 2.0 with Business Applications

    - by peggy.chen
    In today's fast-paced marketplace, successful companies rely on agile business processes and collaborative work environments to stay ahead of the competition. By making your application-based business processes visible, shareable, and flexible through dynamic, process-aware user interfaces, you can ensure that your team's best ideas are heard-and implemented quickly. Join us for this complimentary live Webcast and learn how Oracle's business process management (BPM) solution with integrated Enterprise 2.0 capabilities will enable your team to: Embed ad hoc collaboration into your structured processes and gain a unified view of enterprise information-across business functions-for effective and efficient decision-making Reach out to an expanded network for expert input in resolving exceptions in business workflows Add social feedback loops to your enterprise applications and continuously improve business processes Join us for this LIVE Webcast tomorrow as we discuss how business process management with integrated Enterprise 2.0 collaboration improves business responsiveness and enhances overall enterprise productivity. Take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Register for the webcast now!

    Read the article

  • OpenJDK DIO Project Now Live! Java SE Embedded API Accessing Peripherals

    - by hinkmond
    The DIO project on OpenJDK is now live! For those who grew up in the 1970's and 1980's, you might remember Ronnie James Dio, lead singer of Black Sabbath after Ozzy was fired, and lead singer of his own band, Dio. Well, this DIO is not that Dio. This DIO is the OpenJDK Device I/O project, which provides a Java-level API for accessing generic device peripherals on embedded devices, like your Raspberry Pi running Java SE Embedded software. See: OpenJDK DIO Project. Here's a quote:

    + General Purpose Input/Output (GPIO)
    + Inter-Integrated Circuit Bus (I2C)
    + Universal Asynchronous Receiver/Transmitter (UART)
    + Serial Peripheral Interface (SPI)

    If you're familiar with Pi4J, then you're going to like DIO. And, if you liked Ozzy, you probably liked Ronnie James Dio. This will probably make Robert Savage happy too. The part about DIO being live now, not the part about Dio replacing Ozzy, because everyone likes Ozzy. Hinkmond

    Read the article

  • Skype, green screen, no microphone

    - by EddyThe B
    I have issues with Skype using my brand new System 76 Galago UltraPro, running Ubuntu 13.04. I installed Skype through the Ubuntu Software Centre (after allowing Canonical partner stuff), but it won't work: the video is a green screen, and it won't connect to the microphone. The webcam works when using Cheese, and the microphone appears to work in general (it shows sound levels when I go to the Input tab under the Sound settings). I tried to fix the green screen issue using these commands:

        $ echo -e '#!/bin/bash\nLD_PRELOAD=/usr/lib/i386-linux-gnu/libv4l/v4l1compat.so /usr/bin/skype' | sudo tee /usr/local/bin/skype
        $ sudo chmod a+x /usr/local/bin/skype

    as suggested here: http://debianhelp.wordpress.com/2012/09/28/to-do-list-after-installing-ubuntu-13-04-aka-raring-ringtail-operating-system/ but no luck. Any ideas? I have also asked this question to the System 76 tech support folk.

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one): when it detected a ~31GiB disk that it wanted to install to, I was a bit confused, since I had carved out 30GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into my Windows and it brought up some Intel RAID disk utility saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried afterwards (recovery tools, setups, etc.) I couldn't access the drive, which was apparently using the SSD as cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from GParted, but still I couldn't. I tried the diskraid utility from the Windows recovery disk; it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of raw type, and I can't access anything. I can however mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't just back everything up and do a factory re-install. Please, how do I go about solving the issue? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Collision Detection Game Design and Architecture

    - by Chompas
    I've been reading some articles about collision detection. My question here is about ideas on the design for it. Basically, I have a C++ game that has a main loop with entities that have an update method. Based on keyboard input, these characters update their positions. My question is not about how to detect collisions; it's about getting ideas on the best way to implement this. The game has a main character but also enemies that have to collide with each other, so I'm not sure where to put all the iterations for checking collisions, and whether the right way is to check everything against everything. Thanks in advance.
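
    A small illustrative sketch (in Python for brevity, with invented names) of the usual starting point: a dedicated collision pass in the main loop that does a naive all-pairs bounding-box check. For a handful of entities this is fine; with many enemies, a broad phase such as a uniform grid or quadtree is typically added so that only nearby pairs are tested.

        from itertools import combinations

        class Entity:
            """Hypothetical entity with an axis-aligned bounding box."""
            def __init__(self, name, x, y, w, h):
                self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

        def aabb_overlap(a, b):
            return (a.x < b.x + b.w and b.x < a.x + a.w and
                    a.y < b.y + b.h and b.y < a.y + a.h)

        def find_collisions(entities):
            # naive all-pairs check: O(n^2), run once per frame after all updates
            return [(a.name, b.name)
                    for a, b in combinations(entities, 2) if aabb_overlap(a, b)]

        if __name__ == "__main__":
            hero = Entity("hero", 0, 0, 10, 10)
            orc = Entity("orc", 5, 5, 10, 10)
            far_enemy = Entity("far_enemy", 100, 100, 10, 10)
            print(find_collisions([hero, orc, far_enemy]))  # [('hero', 'orc')]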

    Read the article

  • Not able to add html tags through jquery in django [closed]

    - by user1665581
    I am trying to add HTML tags dynamically through jQuery in Django. $("#div1").append("<h3> Hey !! </h3>"); $("#div1").append("<br/>"); But they are not working. However, normal text is getting appended properly, like $("#div1").append("Hey i am here"); I even noticed that some of the tags weren't working outside the script, like <br>, so I had to replace it with <br/>; I also had to apply a closing tag for input, and &nbsp is not working either. What is wrong???

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500GB... This amount of data will increase over time too, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again), and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze, and Carbonite, but each has its problems: Mozy seems overly expensive per gig at 50c; CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!); Backblaze doesn't support Windows Server; Carbonite business pricing is $600 up front for 500GB of storage, and for $229 they will not back up Windows Servers. So, other than those, JungleDisk (at 15c per gig), or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article

  • What about SEO in one page website with ajax loaded content?

    - by Azimbadwewe
    As my title says, I'd like to build a website with just one text input for searching restaurants, and I would like to load the results into a list on the same page via ajax. After the list is loaded, if you click on one row for restaurant details, it loads all the restaurant details via ajax. What about SEO in a website structure like this? Is there a way to index every single restaurant? I'm pretty new to SEO and every comment will be important to me in order to understand and learn more about it. Cheers

    Read the article

  • Dropped connections between Linux Servers in Data Center

    - by Emil H
    I have a number of Linux servers at a US-based datacenter. The servers were installed by the hosting company and are running Fedora Core. We're experiencing problems with dropped connections. The issue seems to be that when we attempt to connect to one of the other servers after a period of inactivity, the first connection attempt will fail, and sometimes the second. However, after that the connection succeeds and it works for a while. This happens for both MySQL connections and raw socket connections, but only seems to occur when connecting to some of our servers. The confusing part is that some of the servers for which we see different behaviors have identical hardware configurations and software. For example, it happens when connecting to a server called mysql2, but not for a server called mysql3. These servers were installed at the same time, with the same specifications. The problem can be reproduced somewhat reliably, but only after waiting fifteen minutes to half an hour. This makes it hard to diagnose, and even harder since I'm not really sure what to look for. I realize that connections sometimes fail and that we should write our applications to compensate for this, but these servers are all in the same data center. Why would it matter if two servers haven't communicated for a while? Does anybody have an idea what might be causing this? Is it a server configuration problem or a network problem that I should contact the hosting company about? What do I tell them to look for? Unfortunately, our experience has been that the support staff doesn't investigate problems in depth unless we give them detailed directions.

    Read the article

  • How does a website like Mathway work?

    - by Bob
    I recently found a website called Mathway. Basically, it works by allowing you to choose your "level of math" (which it uses to determine what tools it should provide to you) and then allows you to input a math problem, which it then solves for you, giving you detailed solutions (you have to try it, it's really cool). I was wondering how it worked on two levels. First off, how would they parse the math problem (and all the sometimes foreign mathematical operators)? How do they get from text to numbers, variables, and operators? Second, how do they generate the explanations? While you have to pay for the detailed solutions (which are explanations of how they solved the problem), I've seen their preview screenshots, and they look very detailed. The explanations are given in full, accurate sentences. How would they generate something like that?
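
    To make the first question concrete, here is a toy tokenizer in Python (nothing to do with Mathway's actual implementation) showing the usual first step of going from text to typed numbers, variables, and operators; a real system would then build an expression tree from these tokens and apply rewrite rules to it, and the record of which rules fired is presumably what the step-by-step explanations are generated from.

        import re

        TOKEN_RE = re.compile(r"\s*(?:(\d+\.?\d*)|([a-zA-Z]+)|([-+*/^()=]))")

        def tokenize(expr):
            """Split text like '3*x + 2 = 11' into (kind, value) tokens."""
            tokens, pos = [], 0
            while pos < len(expr):
                m = TOKEN_RE.match(expr, pos)
                if not m:
                    raise ValueError("unexpected character at position %d" % pos)
                number, name, op = m.groups()
                if number is not None:
                    tokens.append(("num", float(number)))
                elif name is not None:
                    tokens.append(("var", name))
                else:
                    tokens.append(("op", op))
                pos = m.end()
            return tokens

        if __name__ == "__main__":
            print(tokenize("3*x + 2 = 11"))
            # [('num', 3.0), ('op', '*'), ('var', 'x'), ('op', '+'),
            #  ('num', 2.0), ('op', '='), ('num', 11.0)]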

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first-level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified, along with the key context for their use. While OAGIS Ten remains a work in progress, the work group shows progress. Going forward, the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, subgroups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • What is the best way to learn how to develop secure applications

    - by Kenneth
    I would like to get into computer security in my career. What are the best ways to learn how to program securely? It seems to me that, besides textbooks and taking classes in the subject, learning how to "hack" would be one of the best ways to learn. My reason for thinking this is the thought that the best way to learn how to prevent someone from doing what you don't want them to is to learn what they're capable of doing. If this is the case, then this poses another question: how would you go about learning to hack in an ethical manner? I definitely don't want to break laws or cause harm in my quest. Thanks for the input!

    Read the article
