Search Results

Search found 27151 results on 1087 pages for 'end'.


  • LWJGL glRotatef() without rotating axes?

    - by Brandon oubiub
    Okay so, I noticed that when you rotate around an axis, say like this:

      glRotatef(90.0f, 1.0f, 0.0f, 0.0f);

    that will rotate things 90 degrees around the x-axis. However, it also rotates the y and z axes, so now the y-axis points in and out of the screen instead of up and down. So when I try to do something like this:

      glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
      glRotatef(whatever, 0.0f, 1.0f, 0.0f);
      glRotatef(whatever2, 0.0f, 0.0f, 1.0f);

    the rotations around the y and z axes end up not how I want them. Is there any way to rotate just the axes back to their initial position after using glRotatef(), without rotating the object back? Or something like that, so that when I rotate around the y-axis, it always rotates around a vertical axis.
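    For what it's worth, a common fix is to stop composing rotations on the GL matrix stack and accumulate your own orientation matrix, premultiplying each new rotation so it acts about the fixed world axes. A minimal sketch using LWJGL 2's GL11 and util.vector classes (the class layout and method names here are illustrative, not from the question):

      import org.lwjgl.BufferUtils;
      import org.lwjgl.util.vector.Matrix4f;
      import org.lwjgl.util.vector.Vector3f;
      import java.nio.FloatBuffer;
      import static org.lwjgl.opengl.GL11.*;

      public class FixedAxisRotation {
          private final Matrix4f orientation = new Matrix4f(); // starts as identity
          private final FloatBuffer buf = BufferUtils.createFloatBuffer(16);

          /** Rotate about a fixed world-space axis by premultiplying. */
          public void rotateWorld(float degrees, float x, float y, float z) {
              Matrix4f r = new Matrix4f();
              r.rotate((float) Math.toRadians(degrees), new Vector3f(x, y, z));
              Matrix4f.mul(r, orientation, orientation); // world axis: new * old
          }

          /** Apply the accumulated orientation to the modelview matrix. */
          public void apply() {
              buf.clear();
              orientation.store(buf);
              buf.flip();
              glMultMatrix(buf);
          }
      }

    Calling glRotatef directly postmultiplies (local axes); premultiplying into your own matrix keeps the rotation axes fixed in world space.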


  • GPT Not mounting using "normal" GPT mounting techniques 12.04

    - by Roy Markham
    I've got two 2TB drives: one MBR and the other GPT. sudo blkid /dev/sdb1 returns a blank. gdisk shows:

      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present

      Found valid GPT with protective MBR; using GPT.
      Warning! Secondary partition table overlaps the last partition by 1970 blocks!
      You will need to delete this partition or resize it in another utility.

      Disk /dev/sdb: 3907027055 sectors, 1.8 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 38A1113D-B5E9-4B69-ABFF-ACB27AFB3DDD
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 3907027021
      Partitions will be aligned on 8-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)

      Number  Start (sector)  End (sector)  Size       Code  Name
      1       34              262177        128.0 MiB  0C01  Microsoft reserved part
      2       264192          3907028991    1.8 TiB    0700  Basic data partition

    Mounting via fstab or -t gives the same error whether I use ntfs or ntfs-3g: "NTFS signature is missing". GParted says one partition is overwriting another, yet Windows shows no errors at all. The drive also mounts easily under macOS (triple boot).


  • What is involved for a simple UDP game?

    - by acidzombie24
    I once tried to write a simple game with UDP in a week as a throwaway test. It went horribly, and I threw it away early. The main problem I had was restoring the game state of all players/enemies/objects to an old state and then fast-forwarding the game to the point in time the player is at (e.g. half a second before a jump; being a little early or late can make the player miss the jump). Maybe this method is not the easiest way? I suspect it is, but I designed it wrong from the beginning and only realized that at the end of the second day (so I didn't lose that much time). For myself and others: what is involved in a simple UDP game, and how do I write one? Or, how do I solve the prediction problem of restoring to an old state properly? I'll mark this as CW because I know there will be lots of helpful answers.
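    For reference, the "restore and fast-forward" part is usually built as a ring buffer of snapshots plus a replay of locally buffered inputs. A minimal sketch, in which every type and method name is hypothetical:

      import java.util.ArrayDeque;
      import java.util.Deque;
      import java.util.List;

      /** Hypothetical state/input types; names are illustrative. */
      interface Input {}
      interface GameState {
          GameState copy();         // deep snapshot
          void step(Input input);   // advance one fixed tick
      }

      class RewindBuffer {
          private static final int MAX_TICKS = 60;   // ~1 second at 60 Hz
          private final Deque<Snapshot> history = new ArrayDeque<>();

          static final class Snapshot {
              final long tick; final GameState state;
              Snapshot(long tick, GameState state) { this.tick = tick; this.state = state; }
          }

          /** Call once per simulated tick. */
          void record(long tick, GameState state) {
              history.addLast(new Snapshot(tick, state.copy()));
              while (history.size() > MAX_TICKS) history.removeFirst();
          }

          /** When the server's authoritative state for oldTick arrives, rewind
           *  to it and replay the inputs buffered since, instead of snapping. */
          GameState rewindAndReplay(long oldTick, GameState authoritative,
                                    List<Input> inputsSinceOldTick) {
              history.removeIf(s -> s.tick >= oldTick);        // drop invalidated snapshots
              GameState state = authoritative.copy();
              for (Input in : inputsSinceOldTick) state.step(in); // fast-forward
              return state;
          }
      }

    The key design choice is a fixed tick rate: snapshots and inputs are keyed by tick number, so "half a second" is always exactly 30 ticks regardless of frame rate.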


  • ntfsresize volume and size information

    - by antonio
    I am going to resize my sda2 NTFS partition. When gathering info with ntfsresize, I get:

      ntfsresize --info /dev/sda2
      ntfsresize v2013.1.13 (libntfs-3g)
      Device name        : /dev/sda2
      NTFS volume version: 3.1
      Cluster size       : 4096 bytes
      Current volume size: 21999993344 bytes (22000 MB)
      Current device size: 23622320128 bytes (23623 MB)
      Checking filesystem consistency ...
      Accounting clusters ...
      Space in use       : 10673 MB (48.5%)
      Collecting resizing constraints ...
      You might resize at 10672590848 bytes or 10673 MB (freeing 11327 MB).
      Please make a test run using both the -n and -s options before real resizing!

    Can you tell me what is the difference between volume and device size? As for device size, 23622320128 bytes / 1000^2 = 23622.3 MB. Why is 23623 MB reported instead of 23622? Note that parted confirms this value:

      parted /dev/sda2 unit MB p
      Model: Unknown (unknown)
      Disk /dev/sda2: 23622MB
      Sector size (logical/physical): 512B/512B
      Partition Table: loop
      Disk Flags:

      Number  Start   End      Size     File system  Flags
       1      0.00MB  23622MB  23622MB  ntfs


  • Feeling a bit... under-challenged in my university course

    - by Corey
    I'm currently a sophomore at my university, majoring in Computer Science. Obviously, there are some programming courses as part of my curriculum. However, I'm feeling very underwhelmed by its progress. I've taught myself a lot and like to code in my spare time as a hobby. I'm currently in Computer Science II. I never took CS 1 because it seemed rather basic; I asked someone in the department if they would waive my CS 1 requirement if I passed its final (which I did with flying colors). Anyway, the class is going by quite slowly. It seems like the rest of the class has a hard time understanding some basic concepts, which the professor needs to keep going over to help them understand. Is this normal? Looking at the class schedule, I seem to know everything except for one or two things near the very end of the semester. Is there a different perspective I can look at this through so it doesn't seem so boring?


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?

    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012, yet they're still largely unknown. Some adoption blockers existed; columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details

    Classic vs. columnstore before-&-after metrics are impressive:

      Scenario        Conventional Structures            Columnstore    Improvement
      SSRS via SSAS   10 - 12 seconds                    1 second       >10x
      Ad Hoc          5 - 7 minutes (300 - 420 seconds)  1 - 2 seconds  >100x

    Two charts in the original post characterize this data graphically: one plots Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes on a linear scale, the other shows the same data logarithmically. As is so often the case when we chart such significant deltas, the linear scale doesn’t expose the dramatically improved values corresponding to the columnstore metrics; even on the logarithmic chart, the values corresponding to 1 - 2 seconds are barely visible.

    The Wins

    Performance: Even prior to columnstore implementation, canned report performance against the SSAS cube was tolerable at 10 - 12 seconds, yet the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes & one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, & for which ad hoc query times dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.


  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored.

    Encoding table information

    As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following:

      • Flags specifying various properties of the class, including visibility.
      • The name of the type.
      • The namespace of the type.
      • What type this type extends.
      • The field list of this type.
      • The method list of this type.

    How is all this data actually represented?

    Offset & RID encoding

    Most assemblies don't need to use a 4-byte value to specify heap offsets and RIDs everywhere; however, we can't hard-code every offset and RID to be 2 bytes long, as there could conceivably be more than 65535 items in a heap, or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating whether the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535, then all RIDs referencing that table throughout the metadata use 4 bytes; otherwise only 2 bytes are used.

    Coded tokens

    Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits:

      0x0: TypeDef
      0x1: TypeRef
      0x2: TypeSpec

    We can therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, leaving only 11 bits for the RID before 4 bytes are needed to represent that coded token type.

    For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern:

      0x0321 = 0000 0011 0010 0001

    The least significant two bits specify the table (01 = TypeRef); the other bits specify the RID. Because we've used the bottom two bits, we've got to shift everything along two bits:

      0000 0000 1100 1000

    This gives us a RID of 0xc8.
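    To make the decoding concrete, here is a minimal sketch (in Java rather than anything .NET-specific, purely as illustration; the table names and tag width come straight from the scheme above):

      public class CodedTokenDemo {
          private static final String[] TYPE_DEF_OR_REF = { "TypeDef", "TypeRef", "TypeSpec" };
          private static final int TAG_BITS = 2;  // 2 bits suffice to pick one of 3 tables

          public static void main(String[] args) {
              int token = 0x0321;                          // the post's example value
              int table = token & ((1 << TAG_BITS) - 1);   // least significant bits: table tag
              int rid   = token >>> TAG_BITS;              // remaining bits: the RID
              System.out.printf("table=%s rid=0x%X%n", TYPE_DEF_OR_REF[table], rid);
              // prints: table=TypeRef rid=0xC8
          }
      }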
    If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables.

    Lists

    The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef rows belonging to that type. If we were to specify every FieldDef and MethodDef individually, then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item: if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred from the field list and method list RIDs of the next row in the TypeDef table.

    Going back to the TypeDef

    If we have a look back at the definition of a TypeDef, we end up with the following representation for each row:

      • Flags - always 4 bytes.
      • Name - a #Strings heap offset.
      • Namespace - a #Strings heap offset.
      • Extends - a TypeDefOrRef coded token.
      • FieldList - a single RID to the FieldDef table.
      • MethodList - a single RID to the MethodDef table.

    So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.


  • Azure Blob storage defrag

    - by kaleidoscope
    The Blob Storage is really handy for storing temporary data structures during scaled-out distributed processing. Yet the lifespan of those data structures should not exceed that of the underlying operation; otherwise clutter and dead data can start filling up your Blob Storage. Temporary data in cloud computing is very similar to memory in object-oriented languages: when collection isn't done automatically by the framework, temp data tends to leak. In particular, in cloud computing, it's pretty easy to end up with storage leaks due to:

      • Collection omission.
      • App crash.
      • Service interruption.

    All those events cause garbage to accumulate in your Blob Storage. It must also be noted that for most cloud apps, I/O costs are usually predominant compared to pure storage costs, so enumerating through your whole Blob Storage to clean the garbage is likely to be an expensive solution.

    Lokesh, M
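    As an illustration of such a cleanup pass, here is a minimal sketch using the much later azure-storage-blob v12 Java SDK (which postdates this post; the container name and the one-day age threshold are assumptions). Note the post's caveat about I/O costs applies: scoping temp blobs to a dedicated container keeps the sweep cheap, rather than enumerating the whole account.

      import com.azure.storage.blob.BlobContainerClient;
      import com.azure.storage.blob.BlobContainerClientBuilder;
      import com.azure.storage.blob.models.BlobItem;
      import java.time.OffsetDateTime;

      public class TempBlobSweeper {
          public static void main(String[] args) {
              BlobContainerClient container = new BlobContainerClientBuilder()
                  .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                  .containerName("temp-data")   // hypothetical dedicated temp container
                  .buildClient();

              OffsetDateTime cutoff = OffsetDateTime.now().minusDays(1);
              for (BlobItem item : container.listBlobs()) {
                  // anything older than the longest-running operation is garbage
                  if (item.getProperties().getLastModified().isBefore(cutoff)) {
                      container.getBlobClient(item.getName()).delete();
                  }
              }
          }
      }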


  • Build one to throw away vs Second-system effect

    - by m3th0dman
    On one hand there is the advice that says "Build one to throw away": only after finishing a software system and seeing the end product do we realize what went wrong in the design phase and understand how we should really have done it. On the other hand there is the "second-system effect", which says that the second system of the same kind that is designed is usually worse than the first one: many features that did not fit in the first project get pushed into the second version, usually leading to an overly complex and over-engineered design. Isn't there some contradiction between these principles? What is the correct view of the problems, and where is the border between the two? I believe both of these "good practices" were first promoted in the seminal book The Mythical Man-Month by Fred Brooks. I know that some of these issues are solved by Agile methodologies, but deep down the principles still stand; for example, we would not make important design changes 3 sprints before going live.


  • Lower Your Application Infrastructure Costs w/Oracle Database 11g

    - by john.brust
    Oracle Database 11g is designed to support enterprise applications, including Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Siebel, and every Oracle customer can benefit from the performance, reliability, and security that Oracle Database 11g brings to these applications. Plus, Oracle Database 11g helps you drive down your IT infrastructure costs. Join us next Friday for a webcast conversation with database expert Mark Townsend, Vice President of Oracle's Server Technology Division, to learn how you can benefit from running your applications on Oracle Database 11g. At the end of the presentation, we'll open up for live Q&A for approximately 30 minutes. Register now for our Friday, April 23rd, 2010 9:30am PT | 12:30pm ET live webcast.


  • SQLAuthority News - Professional Development and Community

    I was recently invited by Hyderabad Techies to deliver a keynote for their 16-day online session called TECH THUNDERS. This event has been running from May 15 and will continue up to the end of the month (May 30). There will be a total of 30 sessions. On every evening of those 16 days, there [...]


  • How to store character moves (sprite animations)?

    - by Saad
    So I'm thinking about making a small RPG, mainly to test out different design patterns I've been learning about. But the one question that I'm not too sure how to approach is how best to store an array of character moves. So let's say I have arrays of different sprites. This is how I'm thinking about implementing it:

      array attack = new array(10);
      array attack2 = new array(5);
      (loop)
        // blit some image
        attack.push(imageInstance);
      (end loop)

    Now every time I want the animation I call on attack or attack2; is there a better structure? The problem is, let's say there are 100 different attacks, and a player can have up to 10 attacks equipped. So how do I tell which attacks the user has; should I use a hash map? (See the sketch below.)
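    As a sketch of the hash-map idea the question ends on, in Java (the Sprite type and all class names are illustrative, not from the question): store every animation exactly once in a shared registry keyed by name, and give each character only the keys of its equipped moves.

      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      interface Sprite {}   // stand-in for whatever image type the engine uses

      /** All animations live once in a shared registry, keyed by name. */
      class AnimationRegistry {
          private final Map<String, List<Sprite>> animations = new HashMap<>();

          void register(String name, List<Sprite> frames) {
              animations.put(name, frames);
          }

          List<Sprite> frames(String name) {
              return animations.get(name);
          }
      }

      /** A character only stores the keys of its (up to 10) equipped moves. */
      class PlayerCharacter {
          private final AnimationRegistry registry;
          private final List<String> equipped;

          PlayerCharacter(AnimationRegistry registry, List<String> equipped) {
              this.registry = registry;
              this.equipped = equipped;
          }

          List<Sprite> animationForSlot(int slot) {
              return registry.frames(equipped.get(slot));
          }
      }

    This keeps the 100 attack animations loaded once, while "which attacks does this player have" is just a small list of keys per character.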


  • Is it possible to create a single tokenizer to parse this?

    - by Adrian
    This extends off this other Q&A thread, but goes into details that are out of scope for the original question. I am generating a parser for a context-sensitive grammar which can take in the following subset of symbols:

      ,, [, ], {, }, m/[a-zA-Z_][a-zA-Z_0-9]*/, m/[0-9]+/

    The grammar can take the string { abc[1] }, } and parse it as ({, abc[1], },, }). Another example would be to take { abc[1] [, } and parse it as ({, abc[1], [,, }). This is similar to the grammar used in Perl for the qw() syntax. The braces indicate that the contents are to be whitespace-tokenized. A closing brace must be on its own to indicate the end of the whitespace-tokenized group. Can this be done using a single lexer/tokenizer, or would it be necessary to have a separate tokenizer when parsing this group?
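    For what it's worth, a single tokenizer with a mode flag can handle the group. A minimal Java sketch, simplified to whitespace-separated input throughout (a real single-lexer solution would also split the outer ,, [, ] symbols itself, switching splitting rules on { and }):

      import java.util.ArrayList;
      import java.util.List;

      class QwLexer {
          static List<String> tokenize(String input) {
              List<String> tokens = new ArrayList<>();
              boolean inGroup = false;   // the mode flag
              for (String chunk : input.trim().split("\\s+")) {
                  if (!inGroup) {
                      if (chunk.equals("{")) inGroup = true;
                      tokens.add(chunk);
                  } else if (chunk.equals("}")) {  // } on its own closes the group
                      inGroup = false;
                      tokens.add(chunk);
                  } else {
                      tokens.add(chunk);           // anything else is one word
                  }
              }
              return tokens;
          }

          public static void main(String[] args) {
              System.out.println(tokenize("{ abc[1] }, }"));  // [{, abc[1], },, }]
              System.out.println(tokenize("{ abc[1] [, }"));  // [{, abc[1], [,, }]
          }
      }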


  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases; that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and at the end of his book mentions that at least two levels are needed most of the time. My question is: do you find use case levels necessary, helpful, or unwanted? What are the reasons? Am I missing some important arguments?


  • What should my "code sample" look like?

    - by thesunneversets
    I've just had quite a good phone interview (for a CakePHP-related position, not that it's especially important to the question). The interviewer seemed to be impressed with my resume and personality. At the end, though, he asked me to email him a code sample from my existing work project, "to check you're not secretly a terrible programmer, ha ha!" I'm not too worried that my code can't stand on its own two feet, but I'm very much an intermediate programmer rather than an expert. What obvious pitfalls should I make sure my code sample doesn't fall into, in case they rule me out on the spot? Secondly, and this is probably the harder part of the question to answer, what features in a code sample would be so impressive that they would instantly make you much more favourably inclined towards the programmer? All ideas or suggestions welcomed!


  • The best Windows 7 virtual desktop tool by far… Dexpot

    - by Eric Nelson
    [Oh, and Windows XP, Vista, etc.] Every so often I yearn for the virtual desktop functionality that is implemented so well under Linux. Unfortunately, every time I start looking for a great tool for Windows I ultimately end up disappointed. But... I think this time around I have actually found one that will outlast the first day or two and become a must-have. Check out http://www.dexpot.de/. So far this is 100% stable, 100% sensible, and offers awesome functionality, yet it is still very simple to use. There is a detailed look at the many features on the site, but a couple that do it for me (each illustrated with a screenshot in the original post):

      • The Desktop Manager and next/previous tray icons make it easy to navigate around.
      • The desktop's name is announced as it takes focus.
      • And best of all, Windows 7 preview integration.

    And... it is FREE for private use, and you get 30 days to try it out for professional use (e.g. me).


  • Secure login for a game that is open source

    - by David Park
    I am making a game which I will be open-sourcing. It's a simple arcade-like game, but it requires a network connection because it is meant to be played with other people. The thing I am worried about is: how can I be sure that the client is the one that I put out for the end user to play with? Kind of like sv_pure for Team Fortress 2. I was thinking of different ways to combat this, such as the server requesting the client's version or even its md5 hash, but people with simple Java knowledge could just force a method to always return what the server wants.
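    For illustration, here is a minimal sketch of that hash idea in Java (names are hypothetical, and, as the question itself points out, it is defeatable: a modified client can simply hard-code the expected digest, which is why the robust answer is to keep all authoritative game logic on the server):

      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.security.MessageDigest;
      import java.util.HexFormat;

      class ClientIntegrity {
          /** Digest of the client jar, reported to the server on login. */
          static String clientDigest(Path jar) throws Exception {
              MessageDigest md = MessageDigest.getInstance("MD5");
              md.update(Files.readAllBytes(jar));
              return HexFormat.of().formatHex(md.digest());
          }
          // Server side: compare against the digest of the released jar.
          // A forged client can just return the expected string, so never
          // gate anything security-critical on this check alone: validate
          // every client message server-side instead of trusting the binary.
      }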


  • Is there an alternative to javascript for the web that can do multi-threading and synchronous execution?

    - by rambodash
    I would like to program web applications the way I do with desktop programming languages, where code is synchronously executed and the browser doesn't freeze when doing loops. Yes, I know there are workarounds using callbacks and setTimeout, but they are all workarounds after all, and they don't give the same flexibility as programming in the orthodox way. I've been looking at Dart as a possibility, but I can't seem to find where it says it can do either of these. The same goes for Haxe, Emscripten, and the hundreds of other converters that try to circumvent JavaScript: in the end it gets converted to JavaScript, so you ultimately have to be conscious of asynchronous execution and the lack of multi-threading.


  • Microdata Without Reviews

    - by user36562
    I have not been able to find a clear answer on omitting reviews from Microdata. I understand that Microdata values for reviews will default to a certain number when omitted, but I was wondering if it would be correct/acceptable to omit the review node completely. I can see where reviews and average "star" ratings would be of help to the end user, especially for things like recipes. However, what if there are no reviews for a product or application? To be completely clear, let's isolate this question to only software applications or extensions. What if a particular piece of software or an extension is not featured on an "app store" or another site that provides reviews? Wouldn't the formats still be helpful by providing the version number, download URL, compatible software, etc.? Sorry for the lengthy background, but I just don't understand why it seems that reviews must be part of a Microdata markup. Or am I wrong in this assumption?


  • How do I programmatically determine which port a SQL Server is running on?

    - by Ralph Willgoss
    /*
    Wrapper script for xp_readerrorlog
    Author: Ralph Willgoss
    Date: 2nd Oct 2012

    This script cycles through all log files, looking for the listening port.
    Normally you have to specify the log file one by one; the script removes the need for that.

    Param ref for xp_readerrorlog:
    1. Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
    2. Log file type: 1 or NULL = error log, 2 = SQL Agent log
    3. Search string 1: String one you want to search for
    4. Search string 2: String two you want to search for to further refine the results
    5. Search from start time
    6. Search to end time
    7. Sort order for results: N'asc' = ascending, N'desc' = descending
    */
    USE Master
    GO

    -- Get log count
    DECLARE @logcount int
    -- Drop any leftover temp table from a previous run
    IF OBJECT_ID('tempdb..#Result') IS NOT NULL DROP TABLE #Result
    CREATE TABLE #Result (ArchiveNo int, Date datetime, Size int)

    INSERT INTO #Result
    EXEC xp_enumerrorlogs

    SET @logcount = (SELECT COUNT(*) FROM #Result)

    -- Search the available logs
    DECLARE @counter int
    SET @counter = 0
    WHILE @counter <= @logcount
    BEGIN
       EXEC xp_readerrorlog @counter, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'
       SET @counter = @counter + 1
    END
    GO


  • Looking for information about the HP OpenView Service Desk API, or understanding an API without any information about it

    - by Zagorulkin Dmitry
    Good day folks. I am very confused by this situation. I need to implement a system based on the HP OpenView Service Desk 4.5 API, but this product has reached the end of its support period and no information is available on the official site. I am looking for information about this API (articles, samples, etc.). At the moment I have only web-api.jar and the javadoc, and the methods in the javadoc are poorly documented. If you have any info, please share it with me. Thanks. Second question: are there approaches for understanding an API (with a huge number of methods) when it is not documented and no information is available? PS: If this question does not belong here, I will delete it.


  • How do I inject test objects when the real objects are created dynamically?

    - by JW01
    I want to make a class testable using dependency injection. But the class creates multiple objects at runtime, and passes different values to their constructor. Here's a simplified example:

      public abstract class Validator {
          private ErrorList errors;

          public abstract void validate();

          public void addError(String text) {
              errors.add(new ValidationError(text));
          }

          public int getNumErrors() {
              return errors.count();
          }
      }

      public class AgeValidator extends Validator {
          public void validate() {
              addError("first name invalid");
              addError("last name invalid");
          }
      }

    (There are many other subclasses of Validator.) What's the best way to change this, so I can inject a fake object instead of ValidationError? I can create an AbstractValidationErrorFactory, and inject the factory instead. This would work, but it seems like I'll end up creating tons of little factories and factory interfaces, for every dependency of this sort. Is there a better way?
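    For reference, a minimal sketch of the factory route the question describes, reusing the question's ErrorList/ValidationError types (the interface and constructor shapes are illustrative):

      interface ValidationErrorFactory {
          ValidationError create(String text);
      }

      public abstract class Validator {
          private final ErrorList errors;
          private final ValidationErrorFactory factory;

          protected Validator(ErrorList errors, ValidationErrorFactory factory) {
              this.errors = errors;
              this.factory = factory;
          }

          public abstract void validate();

          public void addError(String text) {
              errors.add(factory.create(text));  // a test injects a fake factory
          }
      }

      // In production: new AgeValidator(errors, ValidationError::new)
      // In a test:      new AgeValidator(errors, FakeValidationError::new)

    With a one-argument constructor, a method reference serves as the factory, so no hand-written factory class is needed per dependency.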


  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions:

      1. My application uses the Festival app 'text2wave'. Will it work on the cloud?
      2. I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work?
      3. If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)?

    I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.


  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I’d like to take a few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals). I feel it is important to track your goals not only to see if you accomplished them but also to see what new directions in life you pursued. Once we enter into 2011 I’ll follow up with a new post on goals for the new year.

    Professional

      • Blog – This year I intended to write at least 2 posts a month. Looking back I far surpassed that goal by writing 47 posts (this one being my 48th). As with many things in life, quantity does not mean quality. A good example is a number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts. That aside, I like to at least keep content relatively fresh on this blog, which I was able to accomplish. At the same time I’ve gotten much more comfortable in my blogging style and it has become much easier to write.
      • Speaking – I didn’t define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events. Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations. I’m very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result.
      • Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and Stir Trek conference. I fulfilled both goals as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus. Each of these events and groups turned out to be successful and I was glad to be a part of them all. I look forward to continuing to volunteer with each next year in some capacity.
      • Android Development – My goal for getting into Android development was a late addition, but one I didn’t necessarily fulfill. I spent a couple nights downloading the tools, configuring my environment, and going through some “simple” tutorials. I say “simple” because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped. After about a week I was frustrated with the process and didn’t think it was a good use of my time. On a side note, I’ve dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proofs of concept.

    Personal

      • Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis. For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym. At the same time I had some major setbacks. In the spring I badly sprained my ankle and got hit in the knee with a softball, which kept me inactive for almost 2 months. More recently I broke my knuckle (click here to read about it), which I am still recovering from.
      • Volunteering – On the volunteering front I kept my commitments at my parish’s high school youth group. As for other volunteering opportunities, I got involved with a great organization called Columbus Gives Back (website). I’ve volunteered with them a few times and really enjoy their goal to provide opportunities to people with busy schedules. They offer a variety of events, typically after work hours and spread out around Columbus, with no set commitments on the time you need to put in. If you have the time or motivation I highly recommend them.
      • House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead. I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home.

    Conclusion

    This year I was able to set and achieve many of my goals. For next year I’ll try to put more specific numbers to all of my goals. If any of you readers set goals for 2011, feel free to send me a link as I’d love to see what you are aiming to accomplish. Have a great end of 2010 and best wishes for the start of 2011!

    -Frog Out


  • Is there something better than a StringBuilder for big blocks of SQL in the code

    - by Eduardo Molteni
    I'm just tired of making a big SQL statement, testing it, and then pasting the SQL into the code, adding all the sqlstmt.append(" at the beginning and the ") at the end. It's 2011; isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs.

    Edit: Found the answer using XML literals and CData. Thanks to all the people that actually tried to answer the question without questioning me for not using ORMs or SPs, and for using VB.

    Edit 2: The question leaves me thinking that languages could try to make a better effort at supporting inline SQL with color syntax, etc. It would be cheaper than developing Linq2SQL. Just something like:

      dim sql = <sql>
         SELECT * ...
      </sql>
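    As an aside (not from the original post): Java later gained an equivalent facility, text blocks (Java 15+), which keep a big chunk of SQL's layout intact with no per-line quoting or appends. The query itself here is purely illustrative:

      String sql = """
              SELECT o.id, o.total, c.name
              FROM orders o
              JOIN customers c ON c.id = o.customer_id
              WHERE o.created_at >= ?
              ORDER BY o.created_at DESC
              """;  // hypothetical table and column names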

