Search Results

Search found 18096 results on 724 pages for 'let'.


  • Why can't I create a public folder?

    - by Bryan
    I need to create a new Exchange public folder as a subfolder of a folder I'm already the owner of. Whenever I try (from Outlook 2007) to create a new public folder, I'm told I don't have permission. Outlook doesn't let me view the permissions of this folder; however, it lets me view the permissions of other folders that I'm the owner of. Both ESM and PFDAVAdmin report that my regular (i.e. non-domain-admin) account is the owner of the folder. Our setup is as follows: Exchange 2003 running on Server 2003, Windows 2008 R2 domain; Windows XP desktop, Outlook 2007; everything fully patched. What am I doing wrong?

    Read the article

  • OpenGL - What software is needed to learn OpenGL on OS X?

    - by sugar
    I strongly believe that my question isn't related to programming, which is why I asked it here (Super User). Here, I am not asking what links I should follow to learn OpenGL. I am asking: what software is needed for learning OpenGL? Let me explain more briefly: I want to learn OpenGL for iPhone game development. Tools for creating 3D objects would be preferable, ideally ones targeted at OS X. In short, I would like to know the list of applications used by professional iPhone game developers on OS X. Thanks in advance for sharing your knowledge. Please add a comment before downvoting. Sagar.

    Read the article

  • How to search for a tester?

    - by MainMa
    As a freelance developer, I have tried a few times to find testers for my software/web applications. I try to find them because most customers don't intend to hire external testers and don't see how doing so would benefit them, so products end up UI-untested and buggy. I have tried lots of things: discussion boards for IT people, websites for job seekers. Every time, I state clearly that I'm looking for product testers. I have completely failed to find anybody for this job. Instead, I found two types of people: Non-IT people who try to qualify as testers, but don't have enough skills for it and don't really know what testing is or how to do it; Programmers, who are skilled as programmers but not as testers, and who mostly don't understand what testing is about either (they think it's the same thing as code review, or that it consists of writing unit tests). Of course, they submit general programmer resumes, where they describe their extensive experience with Assembler and C++, but say nothing about anything related to the job of a tester. What am I doing wrong? Isn't it called "tester"? Is there even a distinct tester job, different from a general programming job? Is there any precise requirement I can ask of each candidate that would filter out non-IT people and general programmers?

    Read the article

  • Java issues on OpenVZ Ubuntu 11.04 (.jar/.sh files)

    - by IWillNotChange
    I've had a whole series of messes with Java and .jar files. I've tried both OpenJDK (from the software installer) and about three repositories for Sun. /Desktop# java -jar -Xmx1024m ss.jar Exception in thread "main" java.awt.HeadlessException at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:173) at java.awt.Window.<init>(Window.java:476) at java.awt.Frame.<init>(Frame.java:419) at java.awt.Frame.<init>(Frame.java:384) at javax.swing.JFrame.<init>(JFrame.java:174) at org.powerbot.bd.<init>(Unknown Source) at org.powerbot.Boot.main(Unknown Source) Two separate errors: ~/Desktop# ./ss.sh [SEVERE] org.server.Boot: Default heap size of 490m too small, restarting with 768m and about 30 different crashes where it just "aborts" with a huge file dump. Each time I've tried something a little different, whether it be updating Java or just changing -Xmx1024 to -Xmx1024m to get rid of the heap error. Personally I think it has something to do with OpenVZ, but Google hasn't saved me this time; I need someone who can get to the bottom of my problem. java -version java version "1.6.0_26" Java(TM) SE Runtime Environment (build 1.6.0_26-b03) Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode) is my current install. Running ss.sh gives me (I'd post the entire log but it's long): # # A fatal error has been detected by the Java Runtime Environment: # # SIGILL (0x4) at pc=0x00002b14278e6fa0, pid=9301, tid=47365590714112 # # JRE version: 6.0_26-b03 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.1-b02 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [ld-linux-x86-64.so.2+0x14fa0] _dl_make_stack_executable+0x2b50 # # If you would like to submit a bug report, please visit: # http://java.sun.com/webapps/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # I'm willing to let someone who knows what they are talking about view it and try to sort this out. Any help would be appreciated; I've about pulled out all my hair Googling to no avail.

    Read the article

  • How can I support objects larger than a single tile in a 2D tile engine?

    - by Yheeky
    I'm currently working on a 2D engine containing an isometric tile map. It's running quite well, but I'm not sure if I've chosen the best approach for that kind of engine. To give you an idea of what I'm thinking about right now, let's have a look at a basic object for a tile map and its objects: public class TileMap { public List<MapRow> Rows = new List<MapRow>(); public int MapWidth = 50; public int MapHeight = 50; } public class MapRow { public List<MapCell> Columns = new List<MapCell>(); } public class MapCell { public int TileID { get; set; } } With those objects it's only possible to assign a tile to a single MapCell. What I want my engine to support is something like groups of MapCells, since I would like to add objects to my tile map (e.g. a house with a size of 2x2 tiles). How should I do that? Should I edit my MapCell object so that it can hold a reference to other related tiles, and how can I find an object when clicking on a single MapCell? Or should I take another approach, using a global container with all objects in it?
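
    One possible direction, sketched below, is the reference-based approach: store each multi-tile object once and let every MapCell it covers point back at it, so a click on any covered cell resolves to the same object. This is only a minimal sketch; the names MapObject, Occupant, PlaceObject and ObjectAt are illustrative and not from the original post, and it assumes the TileMap/MapRow/MapCell shapes shown above.

        using System.Collections.Generic;

        // Sketch only: a record for anything larger than one tile, e.g. a 2x2 house.
        public class MapObject
        {
            public int Width { get; set; }    // footprint size in tiles
            public int Height { get; set; }
            public int OriginX { get; set; }  // top-left cell of the footprint
            public int OriginY { get; set; }
        }

        public class MapCell
        {
            public int TileID { get; set; }
            public MapObject Occupant { get; set; }  // null when no large object covers this cell
        }

        public class MapRow
        {
            public List<MapCell> Columns = new List<MapCell>();
        }

        public class TileMap
        {
            public List<MapRow> Rows = new List<MapRow>();
            public List<MapObject> Objects = new List<MapObject>();
            public int MapWidth = 50;
            public int MapHeight = 50;

            // Stamp a reference to the object into every cell it covers.
            public void PlaceObject(MapObject obj)
            {
                Objects.Add(obj);
                for (int y = obj.OriginY; y < obj.OriginY + obj.Height; y++)
                    for (int x = obj.OriginX; x < obj.OriginX + obj.Width; x++)
                        Rows[y].Columns[x].Occupant = obj;
            }

            // A click on any covered cell resolves to the same object.
            public MapObject ObjectAt(int x, int y)
            {
                return Rows[y].Columns[x].Occupant;
            }
        }

    A global container on its own also works, but then every click has to scan the object list for a footprint containing the clicked cell; the per-cell reference trades a little memory for a constant-time lookup.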

    Read the article

  • All my folders and files on my flash drive have been renamed automatically and I can no longer open them... I need those files

    - by jennifer
    I opened up my flash drive this morning and all of my folders and files are normal, except for one folder and all its included files, which is the most important folder. The subfolders and files are renamed with bizarre characters and when I click to open them, a pop-up appears saying it's not accessible and the filename or directory name is incorrect. I don't want to reformat the flash drive because I'd lose all those files. Is there a way for me to restore it or something? I would attach a screen shot, but apparently new users do not have that privilege. If you have a vague idea of what I'm talking about, let me know and I can email you screenshots so you can have a better understanding. Any help is greatly appreciated!

    Read the article

  • Is there a small, portable Linux with a good development environment?

    - by Sriram
    Let me put it this way: I use Windows / my company wants me to use Windows; I like Linux; I don't want to use Cygwin; I want a simple, portable Linux with a development environment (make, gcc, g++, llvm, ...), and a bash shell and vi are enough for me, no GUI needed. These four points never change. ;) I tried Damn Small Linux; it's awesome, but it doesn't have what I need. So is there a portable Linux distribution that I can run from Windows using QEMU or something similar, with a good, up-to-date development environment? Thanks in advance

    Read the article

  • Coding Dynamic Events?

    - by Joey Green
    I have no idea what the title of this question should be, so bear with me. My game has turns. On a turn, a player does something, and this can result in a random number of explosions that occur at different times. I know when each explosion is done; I need to know when ALL of them are done and then do some other action. Also, each explosion lasts the same amount of time, say 3 seconds. Right now I'm thinking of using a counter to hold how many explosions are happening, then decrementing it once each explosion is finished. Once the counter is zero, do my action. This idea is inspired by Objective-C memory management, by the way. Anyway, does this sound like a good approach, or would there be a better way? An alternative might be to figure out which explosion happens last and let it be responsible for calling the subsequent action. I'm asking mostly because I haven't done this before and am trying to figure out if there are bugs that may occur that I'm not foreseeing.
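
    A minimal sketch of the counter idea, assuming C# and purely illustrative names (ExplosionTracker, onAllDone are not from the original question):

        using System;

        // Sketch only: counts live explosions and fires a callback when the last one ends.
        public class ExplosionTracker
        {
            private int activeExplosions;
            private readonly Action onAllDone;

            public ExplosionTracker(Action onAllDone)
            {
                this.onAllDone = onAllDone;
            }

            // Call when an explosion is spawned.
            public void ExplosionStarted()
            {
                activeExplosions++;
            }

            // Call when an explosion's 3-second lifetime ends.
            public void ExplosionFinished()
            {
                activeExplosions--;
                if (activeExplosions == 0)
                {
                    onAllDone();  // every explosion for this turn has finished
                }
            }
        }

    One bug to watch for with this pattern: if the counter can dip to zero while later explosions are still being spawned, the action fires early. Registering all explosions before any of them can finish (or only checking the counter once spawning is complete) avoids that.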

    Read the article

  • Is this kind of design - a class for Operations On Object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code only worked for the scenario it was first designed for; now there are more scenarios (like requests from an API or from other devices), so I had to redesign. I found myself moving all the DB code into objects which act as business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic would be compromised, and then perform the operation according to the information in the InformationObject. So my question is: is this kind of implementation correct? P.S. This Operator only works on a single business-wise operation.
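
    For illustration, a minimal sketch of the shape being described; every name here (OrderInformation, IOrderRepository, OrderOperator) is hypothetical and not from the original system:

        using System;

        // The "InformationObject": a plain carrier of everything the operation needs.
        public class OrderInformation
        {
            public int CustomerId { get; set; }
            public decimal Amount { get; set; }
            public string Source { get; set; }  // e.g. "API", "Device", "Web"
        }

        // A business-to-DB object: persistence only, no business rules.
        public interface IOrderRepository
        {
            bool CustomerExists(int customerId);
            void SaveOrder(OrderInformation info);
        }

        // The "OperatorObject": validates the InformationObject, then performs the operation.
        public class OrderOperator
        {
            private readonly IOrderRepository repository;

            public OrderOperator(IOrderRepository repository)
            {
                this.repository = repository;
            }

            public void Execute(OrderInformation info)
            {
                // Business validation lives here, regardless of which scenario
                // (API, device, web) built the InformationObject.
                if (info.Amount <= 0)
                    throw new InvalidOperationException("Amount must be positive.");
                if (!repository.CustomerExists(info.CustomerId))
                    throw new InvalidOperationException("Unknown customer.");

                repository.SaveOrder(info);
            }
        }

    Separating the data (the InformationObject) from the behaviour (the Operator) is essentially the parameter-object plus service/command style: each scenario is free to build the InformationObject however it likes, while the validation stays in one place.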

    Read the article

  • Duplicating someone's content legitimately & writing HTML to support that

    - by Codecraft
    I want to add content from other blogs to my own (with the authors' permission) to help build additional relevant content and support articles I've found useful that others have written. I'm looking into how to do this responsibly - i.e., by giving the original content author a boost and not competing against them for search traffic which should go to their site. In order to keep my duplicate content out of search, and to hint to the search engines where the original content is to be found, I've implemented: <head> <meta name='robots' content='noindex, follow'> <link rel='canonical' href='http://www.originalblog.com/original-post.html' /> </head> Additionally, to boost the original article and to let readers know where it came from, I'll be adding something like this: <div> Article originally written by <a href='http://www.authorswebsite.com'>Authors Name</a> and reproduced with permission.<br/> <a href='http://www.originalblog.com/original-post.html' target='new'> Read the original article here. </a> </div> All that remains is a way to 'officially' credit the original author in the HTML for the search spiders to see. Can anyone tell me a way to do this, possibly using rel="author" (as far as I can see that's only good for my own original content), or perhaps it doesn't matter given that the reproduced pages will be kept out of search engines? Also, have I overlooked anything in the approach?

    Read the article

  • Monitor the shell activity of a user on your Unix system?

    - by Joseph Turian
    Trust, but verify. Let's say I want to hire someone as a sysadmin and give them root access to my Unix system. I want to disable X Windows for them and only allow shell usage (through SSH, maybe), so that all operations they perform will be through the shell (not mouse operations). I need a tool that will log, to a remote server, all commands they issue, as they issue them. So even if they install a back door and cover their tracks, that will be logged remotely. How do I disable everything but shell access? Is there a tool for instantaneously logging commands remotely as they are issued?

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job.  Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.”  Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.”  Of course, this is a gross oversimplification.  Tools only ease reuse, its developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking…  we always used to say that as part of our mantra as to why Object-Oriented Programming was so great.  With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible.  And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years.  In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now.  It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it.  Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time.  Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard.  In fact, reuse is DAMN hard.  Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions.  Now don’t get me wrong, I love the idea of reusable code when it makes sense.  These are in the few cases where you are designing something that is inherently reusable.  The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components.  That should always be standard across the board when promoting reusable code.  All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused.  First, let’s look at an extension method.  There’s many times where I want to kick off a thread to handle a task, then when I want to reign that thread in of course I want to do a Join on it.  But what if I only want to wait a limited amount of time and then Abort?  Well, I could of course write that logic out by hand each time, but it seemed like a great extension method: 1: public static class ThreadExtensions 2: { 3: public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait) 4: { 5: bool isJoined = false; 6:  7: if (thread != null) 8: { 9: isJoined = thread.Join(timeToWait); 10:  11: if (!isJoined) 12: { 13: thread.Abort(); 14: } 15: } 16: return isJoined; 17: } 18: } 19:  When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.  
Some of them are standard OO principles, and some are kind-of home grown litmus tests: Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility). Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes. Open-Closed Principle (OCP) – This class is inherently closed to change. Small and Stable Problem Domain – This method only cares about System.Threading.Thread. All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they most use every method, but they shouldn’t be using a method just to get half of its result. Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!). Okay, all seems good there, now lets look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile. 1: namespace Shared.Entities 2: { 3: public class Account 4: { 5: public int Id { get; set; } 6:  7: public string Name { get; set; } 8:  9: public Address HomeAddress { get; set; } 10:  11: public int Age { get; set;} 12:  13: public DateTime LastUsed { get; set; } 14:  15: // etc, etc, etc... 16: } 17: } 18:  19: ... 20:  21: namespace Shared.DataAccess 22: { 23: public class AccountDao 24: { 25: public Account FindAccount(int id) 26: { 27: // dao logic to query and return account 28: } 29:  30: ... 31:  32: } 33: } Now to be fair, I’m not saying there doesn’t exist an organization where some entites may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests: Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is which can change over time and may have multiple influences depending on the number of systems an account can cover. Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself which also is largely dependent on the business definition of an account which can be very inherently unstable. Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition may change, you’d need to modify this class. Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources? All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution. 
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO are usually far higher than the cost to recreate as needed. It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc). As we got deeper and deeper in the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple, “Reuse is hard...”  And that’s when I realized, that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t. Think about classes the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult and many times ends up being a futile exercise and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability. So what do I think are reusable components? Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever. Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc). Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider but a development shop mandate to perform certain complex items in a certain way.  Or, perhaps to simplify and dumb-down a complex task for the average developer (such as implementing a particular development-shop’s method of encryption). And what are less reusable? Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so tend to be an unstable dependency.  May be reused across applications but also very volatile. 
Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable. So what’s the big lesson?  Reuse is hard.  In fact it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse.  But you must also really set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Serve up PC hard drive as USB mass storage

    - by sheepsimulator
    Is there a software package available that can serve up a hard-drive internal to a PC and make it available over USB to other USB Master nodes as mass storage? Ex: take your C: or /dev/hda drive on a PC (let's call the computer PC-A), and run a driver program which makes your C: or /dev/hda drive available to external devices as USB mass storage. When you'd hook up another PC (PC-B) to PC-A via USB, it would detect a USB mass storage device, which is C: or /dev/hda on PC-A. Is this even possible? EDIT: I know that there are other ways of making data on a drive available between two different computers (eg. putting PC-A's hdd in a USB-drive-enclosure, or having PC-A make the hdd available via a network share). But I'd like to know if the method that I describe above is even technically possible.

    Read the article

  • File Saving Sometimes Fails

    - by YellPika
    When I attempt to save files, it sometimes (randomly) fails. In Blender, I sometimes get "Version Backup Failed: File Saved With @". In Visual Studio, building sometimes fails with an error message indicating that the target file/exe cannot be overwritten. If I wait a bit, I can save fine. It's almost as if programs are taking an abnormal amount of time to 'let go' of the files. What could be causing this behaviour? This seems to be caused by Windows Live Mesh monitoring my files, and locking them whenever it uploads the new versions (BAD considering the amount of times I save my files, even redundantly). Any suggestions to work around this behaviour? Should I switch to a better service to sync my files?

    Read the article

  • Re-streaming RTMP stream

    - by Yvan JANSSENS
    I have a set of local RTMP stream servers on my network, but I want them to be reachable from outside. The bandwidth is too narrow to serve multiple clients from the stream servers on my network, so the idea is to pull the local RTMP streams onto a computer serving as a gateway, which in turn pushes them to a hosted streaming provider. It is not possible to let the sources of the stream push their streams directly to the outside server due to network policy restrictions. Scheme of what I'm trying to accomplish: Internal network | External network ------------ ------------ ----------------------- | internal | <---- | Gateway | ------> | streamserver outside| | streams | ------------ ----------------------- ------------ | ^ | | | ----------- | | clients | | ----------- My question now is: which application can pull a live stream from an RTMP source (Flash Media Server) and push it to another one (a Flash Media Server at the hosting provider)?

    Read the article

  • best practice with memcache/php - multi memcache nodes

    - by user62835
    So I am working on a web app that has to be built for scalability. It stores frequent MySQL queries in the cache. I have pretty much everything built and ready to go, but I am concerned about best practices for deciding where to cache the data. I've talked to a few people, and one of them suggested splitting each key/value across all the memcache nodes. Meaning if I store, for example, 'somekey', 'this is the value', it will be split across, let's say, 3 memcache servers. Is that a better way? Or is memcache built more on a 1-to-1 relationship? For example: store the value on server A until it faults out, then go to server B and store it there. That is my current understanding from the research I have done and past experience working with memcache. Could someone please point me in the right direction on this and let me know which way is best, or if I have this completely mixed up. Thanks
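
    For illustration, a minimal sketch (in C# rather than PHP, with made-up names) of how typical memcached clients route keys: each key hashes to exactly one node, so a single key/value pair is never split across servers; only the keyspace as a whole is spread over them.

        using System;
        using System.Collections.Generic;

        // Sketch only: real clients use consistent hashing so that adding or removing
        // a node remaps as few keys as possible; the principle is the same, though:
        // one key always maps to one server.
        public static class NaiveKeyRouter
        {
            public static string ServerFor(string key, IList<string> servers)
            {
                int hash = key.GetHashCode() & 0x7fffffff;  // mask sign bit; illustrative, not a stable production hash
                return servers[hash % servers.Count];
            }
        }

        // Usage: 'somekey' and its whole value land on a single node.
        // var servers = new List<string> { "10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211" };
        // string node = NaiveKeyRouter.ServerFor("somekey", servers);

    Failover along the lines of "try server A, then fall back to server B" is generally a separate client-side policy rather than something the memcached server does itself.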

    Read the article

  • IIM Calcutta – EPBM 14 – Campus Visit – Day 1 – Registration & Beginning

    - by Ram Shankar Yadav
    Hey Guys! I’m back with the updates, it was an awesome Monday morning, for me it started when Sun came on my face, and the time was 5:30AM~~ I was amazed that this part of the country gets the sunrise quite early, but I ignored the sunlight for a while by covering my face, but…finally the door knocked….~ It was Mukesh, and the time was 6 AM, so I thought let’s get rid of laziness and start my day~ After having my brush and bath, I shaved and we headed for the Breakfast~ We quickly had our bread butter jam combo, and left for the Auditorium for Registration~ We searched for our names and signed the Registration paper and got a cool IIM C bag, with following in it: - a IIMC Notepad - Cello X Caliber pen - a book “What the Best MBAs Know”, and - Reading Material for Campus Sessions Today we had lectures on “Evolution of Indian Corporate Sector” (2 Session of 1.5 hrs each) and “Indian Economy: Crisis & Response” (2 Sessions of 1.5 hrs). “Evolution of Indian Corporate Sector” was by Prof. Raghabendra Chattopadhyay, was one of my best lectures I’ve ever attended in my life, he started with a question that saying that “The Indian Capitalists didn’t wanted the economy to open up till the economic reforms occurred?”, he is one of the best story tellers I’ve ever met, he started with the ancient European and Indian history and linked the trade & economics with it, simply amazing~ I can’t believe I didn’t get bore even after a 2hour long session…awesome~~ Afterward we had our lunch break, we did our lunch in “New Hostel” building and got back for “Indian Economy” sessions. Indian Economy session was taken by Sudip Chaudhuri, for us he’s a well known face as we have already attended his sessions on Macroeconomics~ It was an interactive, easy going, and a laughable session, and we did discussed some serious issues as well. After the class got over we went out and got few T-Shirts and Mugs for ourselves, and yep not to forget it “Rained” in Kolkata today~~ We got back and had our dinner and dispersed finally… I loved this amazing Monday, and hope the spirit continues till Saturday~ I’m feeling the enrichment in my thought and perceptions~ I’m lovin’ it~~ ram :)

    Read the article

  • Is SQL Azure a newbie's springboard?

    - by jamiet
    Earlier today I was considering the various SQL Server platforms that are available today, and I wondered aloud: "wonder how long until the majority of #sqlserver newcomers use @sqlazure instead of installing locally". Let me explain. My first experience of development was way back in the early 90s, when I would crank open VBA in Access or Excel and start hammering out some code, usually by recording macros and looking at the code they produced (sound familiar?). The reason was simple: Office was becoming ubiquitous, so the barrier to entry was incredibly low, and, save for a short hiatus at university, I've been developing on the Microsoft platform ever since. These days I spend most of my time using SQL Server. When I take a look at SQL Azure today, I see a lot of similarities with those early experiences: the barrier to entry is low and getting lower. I don't have to download some software or actually install anything other than a web browser in order to get myself a fully functioning SQL Server database against which I can ostensibly start hammering out some code, and I believe that to be incredibly empowering. Having said that, there are still a few pretty high barriers, namely: I need to get out my credit card; it's pretty useless without some development tools such as SQL Server Management Studio, which I do have to install. The second of those barriers will disappear pretty soon when Project Houston delivers a web-based admin and presentation tool for SQL Azure, so that just leaves the matter of my having to use a credit card. If Microsoft have any sense at all then they will realise the huge potential of opening up a free, throttled version of SQL Azure for newbies to party on; they get to developers early (just like they did with me all those years ago) and it gives potential customers an opportunity to try before they buy. Perhaps in 20 years' time people will be talking about SQL Azure as being their first foray into the world of coding! @Jamiet

    Read the article

  • Netretail's online retail operation benefits from personal contact

    - by christopher.jones
    Hot on oracle.com is a snapshot of Netretail Holding B.V. profiling their use of PHP and Oracle technology such as Oracle RAC cluster database to become a leading online retailer across Central and Eastern Europe. We've also just refreshed our key PHP Scalability and High Availability whitepaper which talks about connection pooling (DRCP) and Fast Application Notification (FAN). We brought it up to date for 11gR2 and PHP 5.3. It now includes the new 11gR2 V$CPOOL_CONN_INFO view, the new columns for DBA_CPOOL_INFO, information about LOGOFF triggers, and information about the support for Client Result Caching with DRCP. Back to Netretail. Two of their secrets to success are keeping technically up to date, and networking. That is, networking in the business sense. I had the pleasure of meeting Michal Táborský (@whizz), the Chief System Architect, when he was in California for a Velocity conference. Michal took time to visit Oracle HQ and talk with our developers about his then current architecture and future needs. I also met his manager at last year's Oracle OpenWorld conference. Having built up a relationship with us, Netretail now has access to Oracle Development staff. While this will never bypass Oracle Support (which have tools, systems etc that are needed and useful for resolving issues), it makes communication easier for some classes of questions. It helps discussions that will let us improve Oracle products, and make Netretail stronger. I like this. And there's no reason why you can't talk with us too. You know where to email me.

    Read the article

  • What is your unique programming problem-solving style? [closed]

    - by gcc
    Everyone has their own style and technique for approaching and solving real-world problems. These distinguish us from other people and other programmers. (Actually, I think it makes us more desirable as programmers and improves computer science.) To improve, we read a lot of books: on programming style, how to solve problems, how to approach problems, software and algorithms, et al. Can I learn your technique? In other words, if someone gives you a problem, what is the first step you take to solve it? I want to learn the style in which you approach, analyze, and solve a problem. EDIT: every programmer is a unique instance; each of us approaches problems and converges on solutions in our own... idiomatic manner. This manner is sometimes a quirk of training, a bias of tools, but often it is an insightful nugget, a little golden hammer that cracks nuts just slightly faster than others. When answering, give your general approaches but also take a moment to identify how you look at things in ways that your peers do not. Let's call this your Unique Solving Perspective, or USP.

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs. My method, and how it sucked I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions: What do you do? Select the option that best describes your role What kind of software does your organization make? (optional) In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply. In your organization who is responsible for user interface text? Who "owns" it? The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet What did people do? Boring graph number one: I wasn't expecting that. Given I pimped the survey on twitter and a couple of Tech Comms discussion lists, I was more banking on and even Content Strategy/Tech Comms split. What the "Others" specified: Three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also, an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry. Who wrote UI text? Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production. 
I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated. Who owned UI text Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: In around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring those frequency of those accidents. What the Others specified: I don't know who owns it. I assume the project manager is responsible. "copywriters" when they wish to annoy me. the client's web maintenance person, often PR or MarComm That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand an use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks. The most interesting question Was one I forgot to ask. It's this: Does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it become more interesting. So taken with the other questions, this would have let me find out what I really want to know: What proportion of organizations have Tech Comms professionals but don't use them for UI text? Who writes UI text in their place? Why this happens? 
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspxOne of the Mantras we developers try to follow: Ensure that the solution package we deliver to the client is complete.  We build Web Parts, Master Pages, Images, CSS files and other artifacts that we push to the client with a WSP (Solution Package) And then we have them finish the solution by building their site pages by adding the web parts to the site pages.       I am a proponent that we,  the developers,  should minimize this time consuming work and build these site pages in code.  I found a few blogs and some MSDN documentation but not really a complete solution that has all these artifacts working in one solution.   What I am will discuss and provide a solution for is a package that has: 1.  Master Page 2.  Page Layout 3.  Page Web Parts 4.  Site Pages   Most all done in code without the development team or the developers having to finish up the site building process spending a few hours or days completing the site!  I am not implying that in Development we do this. In fact,  we build these pages incrementally testing our web parts, etc. I am saying that the final action in our solution is that we take all these artifacts and add them to the site pages in code, the client then only needs to activate a few features and VIOLA their site appears!.  I had a project that had me build 8 pages like this as part of the solution.   In this blog post, I am taking a master page solution that I have called DJGreenMaster.  On My Office 365 Development Site it looks like this:     It is a generic master page for a SharePoint 2010 site Along with a three column layout.  Centered with a footer that uses a SharePoint List and Web Part for the footer links.  I use this master page a lot in my site development!  Easy to change the color and site logo with a little CSS.   I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code.    Lets look at the solution package for DJ Green Master as that will be the basis project for building the site pages:   What you are seeing  is a complete solution to add a Master Page to a site collection which contains: 1.  Master Page Module which contains the Master Page and Page Layout 2.  The Footer Module to add the Footer Web Part 3.  Miscellaneous modules to add images, JQuery, CSS and subsite page 4.  3 features and two feature event receivers: a.  DJGreenCSS, used to add the master page CSS file to Style Sheet Library and an Event Receiver to check it in. b.  DJGreenMaster used to add the Master Page and Page Layout.  In an Event Receiver change the master page to DJGreenMaster , create the footer list and check the files in. c.  DJGreenMasterWebParts add the Footer Web Part to the site collection. I won’t go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post.  So what we have is the basis to begin what is germane to this discussion.  I have the first two requirements completed.  I need now to add page web parts and the build the pages in code.  For the page web parts, I will use one downloaded from Codeplex which does not use a SharePoint custom list for simplicity:   Weather Web Part and another downloaded from MSDN which is a SharePoint Custom Calendar Web Part, I had to add some functionality to make the events color coded to exceed the built-in 10 overlays using JQuery!    
Here is the solution with the added projects:     Here is a screen shot of the Weather Web Part Deployed:   Here is a screen shot of the Site Calendar with JQuery:     Okay, Now we get to the final item:  To create Publishing pages.   We need to add a feature receiver to the DJGreenMaster project I will name it DJSitePages and also add a Event Receiver:       We will build the page at the site collection level and all of the code necessary will be contained in the event receiver.   Added a reference to the Microsoft.SharePoint.Publishing.dll contained in the ISAPI folder of the 14 Hive.   First we will add some static methods from which we will call  in our Event Receiver:   1: private static void checkOut(string pagename, PublishingPage p) 2: { 3: if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase)) 4: { 5: 6: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None) 7: { 8: p.CheckOut(); 9: } 10:   11: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online) 12: { 13: p.CheckIn("initial"); 14: p.CheckOut(); 15: } 16: } 17: } 18: private static void checkin(PublishingPage p,PublishingWeb pw) 19: { 20: SPFile publishFile = p.ListItem.File; 21:   22: if (publishFile.CheckOutType != SPFile.SPCheckOutType.None) 23: { 24:   25: publishFile.CheckIn( 26:   27: "CheckedIn"); 28:   29: publishFile.Publish( 30:   31: "published"); 32: } 33: // In case of content approval, approve the file need to add 34: //pulishing site 35: if (pw.PagesList.EnableModeration) 36: { 37: publishFile.Approve("Initial"); 38: } 39: publishFile.Update(); 40: }   In a Publishing Site, CheckIn and CheckOut  are required when dealing with pages in a publishing site.  Okay lets look at the Feature Activated Event Receiver: 1: public override void FeatureActivated(SPFeatureReceiverProperties properties) 2: { 3:   4:   5:   6: object oParent = properties.Feature.Parent; 7:   8:   9:   10: if (properties.Feature.Parent is SPWeb) 11: { 12:   13: currentWeb = (SPWeb)oParent; 14:   15: currentSite = currentWeb.Site; 16:   17: } 18:   19: else 20: { 21:   22: currentSite = (SPSite)oParent; 23:   24: currentWeb = currentSite.RootWeb; 25:   26: } 27: 28:   29: //create the publishing pages 30: CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx","Home"); 31: //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx","Dummy"); 32: }     Basically we are calling the method Create Publishing Page with parameters:  Current Web, Name of the Page, The Page Layout, Title of the page.  
Let’s look at the Create Publishing Page method:   1:   2: private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title) 3: { 4: PublishingSite pubSiteCollection = new PublishingSite(site.Site); 5: PublishingWeb pubSite = null; 6: if (pubSiteCollection != null) 7: { 8: // Assign an object to the pubSite variable 9: if (PublishingWeb.IsPublishingWeb(site)) 10: { 11: pubSite = PublishingWeb.GetPublishingWeb(site); 12: } 13: } 14: // Search for the page layout for creating the new page 15: PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName); 16: // Check or the Page Layout could be found in the collection 17: // if not (== null, return because the page has to be based on 18: // an excisting Page Layout 19: if (currentPageLayout == null) 20: { 21: return; 22: } 23:   24: 25: PublishingPageCollection pages = pubSite.GetPublishingPages(); 26: foreach (PublishingPage p in pages) 27: { 28: //The page allready exists 29: if ((p.Name == pageName)) return; 30:   31: } 32: 33:   34:   35: PublishingPage newPage = pages.Add(pageName, currentPageLayout); 36: newPage.Description = pageName.Replace(".aspx", ""); 37: // Here you can set some properties like: 38: newPage.IncludeInCurrentNavigation = true; 39: newPage.IncludeInGlobalNavigation = true; 40: newPage.Title = title; 41: 42: 43:   44:   45: 46:   47: //build the page 48:   49: 50: switch (pageName) 51: { 52: case "Homer.aspx": 53: checkOut("Courier.aspx", newPage); 54: BuildHomePage(site, newPage); 55: break; 56:   57:   58: default: 59: break; 60: } 61: // newPage.Update(); 62: //Now we can checkin the newly created page to the “pages” library 63: checkin(newPage, pubSite); 64: 65: 66: }     The narrative in what is going on here is: 1.  We need to find out if we are dealing with a Publishing Web.  2.  Get the Page Layout 3.  Create the Page in the pages list. 4.  Based on the page name we build that page.  (Here is where we can add all the methods to build multiple pages.) In the switch we call Build Home Page where all the work is done to add the web parts.  Prior to adding the web parts we need to add references to the two web part projects in the solution. using WeatherWebPart.WeatherWebPart; using CSSharePointCustomCalendar.CustomCalendarWebPart;   We can then reference them in the Build Home Page method.   
Let’s look at Build Home Page: 1:   2: private static void BuildHomePage(SPWeb web, PublishingPage pubPage) 3: { 4: // build the pages 5: // Get the web part manager for each page and do the same code as below (copy and paste, change to the web parts for the page) 6: // Part Description 7: SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx", System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared); 8: WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart() { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" }; 9: //Dictionary<string, string> wwpDic= new Dictionary<string, string>(); 10: //wwpDic.Add("AreaCode", "2504627"); 11: //setWebPartProperties(wwp, "WeatherWebPart", wwpDic); 12:   13: // Add the web part to a pagelayout Web Part Zone 14: mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1); 15:   16: CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart() { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName="CorporateCalendar" }; 17:   18: mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1); 19:   20:   21: pubPage.Update(); 22:   23: } Here is what we are doing: 1.  We got  a reference to the SharePoint Limited Web Part Manager and linked/referenced Home.aspx  2.  Instantiated the a new Weather Web Part and used the Manager to add it to the page in a web part zone identified by ID,  thus the need for a Page Layout where the developer knows the ID’s. 3.  Instantiated the Calendar Web Part and used the Manager to add it to the page. 4. We the called the Publishing Page update method. 5.  Lastly, the Create Publishing Page method checks in the page just created.   Here is a screen shot of the page right after a deploy!       Okay!  I know we could make a home page look much better!  However, I built this whole Integrated solution in less than a day with the caveat that the Green Master was already built!  So what am I saying?  Build you web parts, master pages, etc.  At the very end of the engagement build the pages.  The client will be very happy!  Here is the code for this solution Code

    Read the article

  • How to Generate Spritesheet from a 'problematic' animated Symbol in Flash Pro CS6?

    - by Arthur Wulf White
    In the new Flash Pro CS6 there is an option to generate a spritesheet from a symbol. I used these tutorials: http://www.adobe.com/devnet/flash/articles/using-sprite-sheet-generator.html http://tv.adobe.com/watch/cs6-creative-cloud-feature-tour-for-web/generating-sprite-sheets-using-flash-professional-cs6/ And it works really well! An artist I'm working with created a bunch of assets for a game. One of them is a walking person as seen from a top-down view. You can find the .fla here: https://docs.google.com/folder/d/0B3L2bumwc4onRGhLcGNId1p2Szg/edit (If this does not work let me know; it is the first time I have used Google Drive to share files.) 1. When I press Ctrl+Enter I can see it is moving, but when I look for the animation, I do not seem to find it. When I select to create a spritesheet, Flash suggests creating a spritesheet with one frame in the base pose and no other (animation) frames. What is causing this and how do I correct it? 2. I want to convert it to a spritesheet for 32 angles of movement. Is there any magical, easy way to get this done? Is there a workaround to do the same thing without using Flash CS6?

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried atleast a dozen different tmx files from numerous examples to see if it was the actual file that was corrupted. Everytime I get this error: Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69) at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101) at game.Game.init(Game.java:17) at game.Tunneler.initStatesList(Tunneler.java:37) at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164) at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390) at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314) at game.Tunneler.main(Tunneler.java:29) Here is my Game class: package game; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import org.newdawn.slick.tiled.TiledMap; public class Game extends BasicGameState{ private int stateID = -1; private TiledMap map = null; public Game(int stateID){ this.stateID = stateID; } public void init(GameContainer container, StateBasedGame game) throws SlickException{ map = new TiledMap("data/maps/desert.tmx","maps");//ERROR } public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException{ //map.render(0,0); } public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException{ } public int getID(){return stateID;} } I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the filepath is correct. Please let me know if you have any pointers that might help me sort this out.

    Read the article

  • Invoice from Godaddy with intent to defraud?

    - by Berliner
    Hi webmasters, I have received several emails asking me to renew a domain name: REMINDER: Renew early for multiple years and lock in your savings! For your review, listed below are domain names and their expiration dates. F.....COM - Mar. 09, 2011 Since I lost the domain name a long time ago and couldn't get it back, I asked if it was available again. GoDaddy replied: According to WHOIS the domain name is registered to a Japanese company with the expiry date: 2011-12-02. I wrote to GoDaddy: According to your information the domain holder is a Japanese company as described below. Can you give me an explanation why you send me an email asking me to pay for a domain name which I do not own? (Expiration Date: 2011-12-02) I am just curious, I am sure there is no ill will on your part. GoDaddy answered: Dear Sir or Madam, Thank you for contacting online support. This was just to let you know the domain is registered to someone else and who. Then today I got yet another invoice asking me to renew the same domain name once again: **REMINDER: Renew early for multiple years and lock in your savings! The product(s) listed below have expired or are at risk of expiring: Product Name / Next Attempt Date: .COM Domain Name Renewal - 1 Year (recurring) 03/14/2011 F........COM You are at risk of losing the service(s) or product(s) listed above. Your products are currently set to renew manually – they will NOT be renewed automatically on the next attempt date.** The expiry date has now been changed from the 9th of March to the 14th of March. Another party owns the domain name, and furthermore the domain name was never registered with GoDaddy. This looks like a way to make a few bucks off an unsuspecting customer; it might even be illegal. Any comments on how to take this further would be most welcome.

    Read the article
