Search Results

Search found 28643 results on 1146 pages for 'go'.

  • Great Indian Developer Summit Wrap-Up

    Last week I spoke at the Great Indian Developer Summit in Bangalore, India. This was my second year speaking at GIDS, so it was great to be back. Before the event, Telerik's Team Fantastic Four set up the booth and then hit McDonald's for a Maharaja Mac. Remember, India does not eat beef, so we HAD to go to McDonald's and check it out! Imagine a McDonald's without a hamburger. Totally awesome. (Though we all preferred the McAloo, a potato patty sandwich.) The event is really 4 conferences in 4 days, one day each on .NET, Web, Java, and Seminars. On Day 1 (.NET) I spoke on: Building Data Warehouses; Building Applications with Silverlight and .NET (and sharing the business logic); and What's New in SQL Server 2008 R2. No computer malfunctions like last year; my sessions went smoothly. This is rapid-fire presenting: only 50-minute sessions! With so little time, I had almost ...

    Read the article

  • Android threads: trouble wrapping my head around design

    - by semajhan
    I am having trouble wrapping my head around game design. On the Android platform, I have an activity and set its content view with a custom surface view. The custom surface view acts as my panel, and I create instances of all classes and do all the drawing and calculation in there. Question: should I instead create the instances of the other classes in my activity? Next, I create a custom thread class that handles the game loop. Question: how do I use this one class in all my activities? Or do I have to create a separate instance of the extended thread class each time? In my previous game, I had multiple levels that each had to create an instance of the thread class; in the thread class I had to set constructor methods for each separate level and, in the loop, use a switch statement to check which level it needs to render and update. Sorry if that sounds confusing. I just want to know if the method I am using is inefficient (which it probably is) and how to go about designing it the correct way. I have read many tutorials out there and I am still having lots of trouble with this particular topic. Maybe a link to some tutorials that explain this? Thanks.
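
    The shared game-loop design being described can be sketched without the per-level constructors and the switch statement: the loop thread drives whatever level object it is handed through a common interface. This is only a minimal, non-Android illustration; the Level classes and method names are hypothetical, not taken from the question.

    import threading
    import time

    class Level:
        """Common interface every level implements (hypothetical)."""
        def update(self): ...
        def render(self): ...

    class Level1(Level):
        def update(self): pass   # move sprites, check collisions, ...
        def render(self): pass   # draw this level's scene

    class Level2(Level):
        def update(self): pass
        def render(self): pass

    class GameLoopThread(threading.Thread):
        # One loop thread for the whole game: instead of a constructor per
        # level and a switch inside the loop, it calls the current level
        # through the shared interface and can swap levels without being
        # recreated.
        def __init__(self, level, fps=30):
            super().__init__(daemon=True)
            self.level = level
            self.frame_time = 1.0 / fps
            self.running = True

        def set_level(self, level):
            self.level = level              # move on to the next level

        def run(self):
            while self.running:
                self.level.update()
                self.level.render()
                time.sleep(self.frame_time)

    loop = GameLoopThread(Level1())
    loop.start()
    loop.set_level(Level2())                # later, when level 1 is finished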

    Read the article

  • Webcast - Set Your Sights on Enterprise 2.0 in the Cloud

    - by [email protected]
    To gain a competitive edge in your market, you need your business processes to be more collaborative, agile, and flexible to meet growing business demands. How can you make that happen? One way is to deploy portal, content management, and Enterprise 2.0 capabilities on a cloud infrastructure. According to top industry analysts, Enterprise 2.0 and cloud computing are two of the top three CIO initiatives in 2010. What are some of the advantages associated with deploying your Enterprise 2.0 initiatives in a cloud environment? Learn about the security, performance, and flexibility benefits that are available to you. Watch our complimentary live Webcast, Cloud Computing and Enterprise 2.0--Gain a Competitive Advantage, to get the answers you're looking for. Find out how Oracle pioneered the highly scalable and highly secure solutions that will enable you to: Quickly deploy on a cloud computing infrastructure that can scale as projects go viral Accelerate business processes, such as new product introduction, customer service, and new employee on-boarding Take advantage of best practices in cloud computing and Enterprise 2.0 implementations Join us for this LIVE webcast tomorrow as we show you how to achieve a higher level of performance and flexibility with Enterprise 2.0 and cloud computing. Register today for the live Webcast.

    Read the article

  • Changing frontend cache

    - by Utsav
    Our architecture consists of a front-end cache that most read-only users obtain their data from directly. The front-end cache sits in front of a farm of web servers that serve pages written in PHP. We need to be able to detect certain conditions at the front-end cache level and pass those values through to the back-end via HTTP headers. For example, we would like to manually tag the carrier network based on the IP address. So, for incoming traffic, if the user is coming from an IP address in the range 41.202.192.0/19 we would tag them as an Orange Cameroon user by setting the appropriate HTTP request header, e.g., X-Carrier = "Orange Cameroon". Based on the setting of this header we would like to vary the cache and serve a different banner to the end user. How would you go about doing this? Keep in mind that we don't want to pollute the cache and we also don't want to create too many small cache segments. Assumptions: you can assume that the X-Carrier has already been detected in our cache. So, for the purposes of your test you can just set this value manually in your example script.
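
    The cache product isn't named and the real back-end is PHP, so the following is only a rough sketch of the idea in Python/WSGI: the origin reads the X-Carrier request header (assumed to be set upstream by the cache, as the question allows), picks a banner, and answers with Vary: X-Carrier so the cache keeps one variant per carrier value rather than one per client IP - which limits segmentation to the handful of carriers you actually tag. The banner paths and carrier names below are illustrative only.

    from wsgiref.simple_server import make_server

    BANNERS = {
        "Orange Cameroon": "/banners/orange-cm.png",   # hypothetical asset paths
    }
    DEFAULT_BANNER = "/banners/default.png"

    def app(environ, start_response):
        carrier = environ.get("HTTP_X_CARRIER", "")    # set upstream by the cache
        banner = BANNERS.get(carrier, DEFAULT_BANNER)
        body = f'<img src="{banner}" alt="banner">'.encode("utf-8")
        start_response("200 OK", [
            ("Content-Type", "text/html; charset=utf-8"),
            # One cached copy per X-Carrier value, not per client IP.
            ("Vary", "X-Carrier"),
            ("Cache-Control", "public, max-age=300"),
        ])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()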

    Read the article

  • Design of input file reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when they are not specified, or adding full path information to relative paths specified in the input. There are two different strategies to achieve this. The first strategy is to perform these transformations at input-file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's a default or user-specified information. The second strategy is to have the file reader be a pure one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled in (which may, however, be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, fall back to a default if needed, or modify the input appropriately before using it. I would like to know your opinion on these two approaches, and in particular which one you have found the most frequently implemented.
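
    To make the two strategies concrete, here is a minimal sketch - in Python purely for illustration, even though the question is about languages without a None concept; the setting names and defaults are made up:

    import os
    from dataclasses import dataclass

    DEFAULT_TIMEOUT = 30

    @dataclass
    class Settings:
        output_dir: str
        timeout: int

    # Strategy 1: the parser applies defaults and transformations up front,
    # so the running code never has to ask "was this actually specified?".
    def parse_with_defaults(raw: dict) -> Settings:
        return Settings(
            output_dir=os.path.abspath(raw.get("output_dir", ".")),  # relative -> full path
            timeout=int(raw.get("timeout", DEFAULT_TIMEOUT)),        # default filled in
        )

    # Strategy 2: the reader is a verbatim one-to-one mapping; unspecified
    # keys simply stay absent and the running code does the fallback work.
    def parse_verbatim(raw: dict) -> dict:
        return dict(raw)

    def run(settings: dict) -> None:
        timeout = int(settings["timeout"]) if "timeout" in settings else DEFAULT_TIMEOUT
        output_dir = os.path.abspath(settings.get("output_dir", "."))
        print(f"running with timeout={timeout}, output_dir={output_dir}")

    run(parse_verbatim({"output_dir": "results"}))   # strategy 2 in action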

    Read the article

  • Advantages of Hudson and Sonar over manual processes or homegrown scripts

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass) he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • Learning from jQuery - A solid foundation for experienced jQuery developers

    Frankly speaking, I had to sleep on it for a night before typing this review. And even now it is not an easy, straightforward task to write it. I'm not sure whether I'm the right kind of audience this title is actually addressed to. It clearly states that this book is for web developers who are very familiar with the jQuery library but would like to extend their knowledge to vanilla JavaScript. Not being part of this particular group, it felt strange to go through the various chapters after all. This title is clearly addressed to experienced jQuery users and developers, especially those looking for improvements in performance and better ways of optimisation - sometimes just to simplify the existing jQuery code in order to avoid the heavy load of the complete jQuery library, and sometimes for a better understanding of JavaScript and its syntax. Callum's style of writing is clear, and the numerous code samples used to emphasize the various techniques are good ones and easy to understand. Quite interestingly, it put a light smile on my face when I compared his sample code for sending an AJAX request to some code in one of my own blog articles written back in 2006 (in German). JavaScript is clearly a mature language and certain requirements are simply done this way. And Callum explains the nuts and bolts of JavaScript very well. Personally, I gained most out of this book from chapter 5 - JavaScript Conventions. The paragraphs and code snippets on Optimizations and Common Antipatterns gave me a better understanding of various aspects of JavaScript development, and I definitely have to revise a couple of code fragments I have written in the past. Overall the book provides solid information on JavaScript for jQuery developers and is worth the money spent. Just be sure that you're part of the targeted audience.

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this nice error, and it seems dozens of people across the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot-up it says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard just to get past it. After this the system boots and it's all fine: the md0 device works and /proc/mdstat is fine. When I do mount -a it mounts this array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints how to make it work properly?

    fstab:
    UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (autogenerated):
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    DEVICE partitions
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR CENSORED
    # definitions of existing MD arrays
    ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0
    # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
    # by mkconf 3.1.2-2

    Read the article

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much relevant in the answers.) Clarification: by "not-for-profit", I am counting downstream profit motives, such as parent-company brand recognition and investor profit expectations, as profit motives. In other words, the question relates only to software for which there is NO profit motive tied to the software whatsoever.

    Read the article

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
    I've been working in multi-team groups for as long as I've been a web developer. For me a team can be a lone soldier or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is a small picture, since I'm not talking only about project-wise knowledge but "craft-wise" knowledge, but it gives an idea of how I'm used to working. So: since we work in modularised teams, sometimes I feel like the teams are too tightly enclosed in their projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and a totally unrelated person answered it in a much simpler fashion. The problem is not so simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the "asker", but could do it alone. I've thought about software-based solutions, something along the lines of SE, but I'd like to know other programmers' opinions on the subject. EDIT: I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage the user to actually ask questions, but rather to write articles, and sometimes we don't know the knowledge we need before needing it.

    Read the article

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company who mainly uses PHP to develop their front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features – without much thought for the overall design. The apps themselves output XML to render on small mobile devices. I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example: interfacing with databases in such a way as to make SQL injection impossible, would be very useful too. The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP itself). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB, and will generate about 5 different XML screens. Probably around 1 or 2 thousand lines of code. The time it takes just to configure the frameworks may not be worth it. The final problem I can see is that developers in the company – who have to go over my code, and who do not know the PHP framework I may use – will have a much harder time understanding it. Given those pros and cons, I'm still not sure on what the best course of action will be; so any advice will be greatly appreciated.
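
    The "SQL injection impossible" benefit mentioned above usually comes down to the framework's database layer insisting on parameterized queries instead of string-built SQL. Here is a minimal sketch of the difference - in Python with sqlite3 purely for illustration, since the apps in question are PHP and no specific framework is named:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "nobody' OR '1'='1"   # a classic injection attempt

    # Dangerous: building SQL by string concatenation - exactly what a
    # framework's query layer is designed to keep you away from.
    unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())              # returns every row

    # Safe: a parameterized query; the driver treats the input as data only.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall()) # returns []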

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistence (mainly disk). The Oracle database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database - not vanilla, but still the same product. Let me tell you, purist people out there: when you find no new groundbreaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer launching a minivan and calling it a sports car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares - it still lets you put into flash amounts of data not possible up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind the Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • Frequent Disconnects with an Intel 3945ABG Wireless Card

    - by Alex Forsythe
    I'm brand new to Ubuntu, and I really love it so far, but one issue I have encountered is that my WLAN is disconnecting about every 5-10 minutes. Often times the connection is repaired automatically, but sometimes the network manager will repeatedly reject my encryption key (which of course is correct). Occasionally after a disconnect, the wireless network fails to show up at all. The only way I can solve this seems to be by completely restarting Ubuntu or connecting with a USB wireless adapter. I am using WPA/WPA2 encryption, which I've read can cause problems with network-manager, but I experience the exact same issues with WICD. I should probably note that I've not experienced any of these issues using Windows 7 on my other partition. I have a hunch that there may be a better driver out there for my card, but I have no idea how to go about searching for it or installing it. Any help would be really appreciated!

    lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 11.10
    Release: 11.10
    Codename: oneiric

    lspci -nnk
    09:00.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5752 Gigabit Ethernet PCI Express [14e4:1600] (rev 02)
        Subsystem: Dell Device [1028:0201]
        Kernel driver in use: tg3
        Kernel modules: tg3
    0c:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4222] (rev 02)
        Subsystem: Intel Corporation Device [8086:1020]
        Kernel driver in use: iwl3945
        Kernel modules: iwl3945

    rfkill list
    0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    2: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    3: dell-bluetooth: Bluetooth
        Soft blocked: yes
        Hard blocked: no

    Read the article

  • Security Alert For CVE-2010-4476 Released

    - by eric.maurice
    Hello, this is Eric Maurice again. Oracle just released a Security Alert with a fix for the vulnerability CVE-2010-4476, which affects Oracle Java SE and Oracle Java For Business. This vulnerability is present in Java running on servers as well as standalone Java desktop applications. Its successful exploitation by a malicious attacker can result in a complete denial of service for the affected servers. While only recently publicly disclosed, a number of Internet sites have since then reproduced details about this vulnerability, including exploit codes, which may result in allowing a malicious attacker to create a denial of service condition against the targeted system. Oracle therefore strongly recommends that affected organizations apply this fix as soon as possible. Please note that a fix for this vulnerability will also be included in the upcoming Java Critical Patch Update (Java SE and Java for Business Critical Patch Update - February 2011), which will be released on February 15th 2011. Note that the impact of this vulnerability on desktops is minimal: the affected applications or applets running in Internet browsers for example, might stop responding and may need to be restarted; however the desktop itself will not be compromised (i.e. no compromise at the desktop OS level). Oracle therefore recommends that consumers use the Java auto-update mechanism to get this fix. This will prompt them to install the latest version of the Java Runtime Environment 6 update 24 or higher (JRE), which includes the fix for this vulnerability. JRE 6 update 24 will also be distributed with the Java SE and Java for Business Critical Patch Update - February 2011. For More Information: The Critical Patch Updates and Security Alerts page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html The Advisory for Security Alert CVE-2010-4476 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2010-4476-305811.html More information on Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html Consumers can go to http://www.java.com/en/download/installed.jsp to ensure that they have the latest version of Java running on their desktops. More information on Java Update is available at http://www.java.com/en/download/help/java_update.xml

    Read the article

  • My search for what the Cloud will mean for my Work

    - by Kay Sellenrode
    Since I finished my MCM Exchange 2007 training back in April 2009 I've been struggling with the Cloud. I know it will change the way we do things today, but how will it affect my work? My work is Exchange consultancy, mostly in the Netherlands, but more and more across the globe. In my job as a consultant I noticed last year that a large percentage of my customers showed interest in the cloud services available today, but in most situations it seemed that it wasn't the right time for them to switch to a cloud service. Right now I'm helping one of my customers explore Exchange Online, and it looks like they will switch over from their on-premise Exchange solution. This made me realize more than ever that I need to do something so I don't miss the boat. With Office 365 coming this year, my expectation is that cloud services will take off from now on. I'm also sure that quite a few customers will expect me to help them with their decision between the cloud and the on-premise solution. So in the next months I will explore all the possibilities of Office 365, but also some of the competition in this field. In my search for what the cloud will mean for me and my customers, I will go over all the aspects of the offered solutions. Any help in my search is always welcome. I'm looking forward to the ideas people have around the cloud and how it will change the IT environment, especially in the unified communications field. Next week I will post my first article about my experiences with the cloud so far.

    Read the article

  • How To Back Up a MySQL Database Using phpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably realize this when it is too late. A lot of web applications use MySQL for storing their content - blogs and a lot of other things. When you have all your content as HTML files on your web server, it is very easy to keep it safe from crashes: you just keep a copy on your own PC and upload it again after the web server is restored. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is only stored on the MySQL server, you will feel very bad if it gets lost forever. Backing it up once every month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.

    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will load in phpMyAdmin showing the selected database. To proceed with the backup, click on the Export tab.
    Step 4: The options that you should select, apart from the default ones, are Save as file, which will save the file locally to your computer in .sql format, and Add DROP TABLE, which adds a DROP TABLE statement for each table so that existing tables are replaced when the backup is restored.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.
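
    The export produced this way is essentially the same kind of .sql dump you would get from mysqldump, so if you ever want to script the backup instead of clicking through phpMyAdmin, a rough sketch is below (the user and database names are made up, and it assumes the mysqldump client is installed on the machine running the script):

    import subprocess
    from datetime import date

    DB_USER = "backup_user"        # hypothetical - use your own credentials
    DB_NAME = "myblog"             # hypothetical database name
    outfile = f"{DB_NAME}-{date.today()}.sql"

    # Roughly equivalent to phpMyAdmin's Export with "Add DROP TABLE" enabled;
    # -p with no value makes mysqldump prompt for the password.
    with open(outfile, "w") as f:
        subprocess.run(
            ["mysqldump", "-u", DB_USER, "-p", "--add-drop-table", DB_NAME],
            stdout=f,
            check=True,
        )
    print(f"Backup written to {outfile}")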

    Read the article

  • Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs)

    - by simpleton
    I downloaded 12.04.1, checked the MD5 sums, and everything is good. I made a live USB and booted up... It just gets to where it's about to start, with the purple Ubuntu loading screen, then it drops back to text and gives this message:

    BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
    Enter 'help' for a list of built-in commands.
    (initramfs) mount: mounting /dev/loop0 on ///filesystem.squashfs failed: Input/output error
    Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    This happens with the live option and with persistence; even the disc-checking option doesn't work. I've also tried a few different F6 options to no avail. I used 'LiLi USB Creator', 'unetbootin' and also 'Universal USB Installer', all with the same results. I've also tried using a VM and it showed the same thing. That is when I figured I had a corrupt .iso, so I downloaded it again and checked the MD5: e235b63c02644e219b7bf3668f479c9e. Only I'm having the same problem. I'm just about ready to give up on 12.04.1 and go back to 10.04 until the next LTS comes out. I've got a Dell Mini 10, by the way. Thanks for your time.

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it: my IP address was quickly banned from Google for using it; I got lots of 500 and 503 errors and "waiting 5 minutes…" messages; and ultimately I could recover the text content faster by hand. I've had much better luck using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your task asks you to implement functionality in WebCenter Portal, you browse through the Resource Catalog (Business Dictionary) and find the functionality you need, but when you get started there are small shortcomings and you ask yourself: how can I re-use what is out of the box? What code do I need to produce similar functions and include my new requirements? Must I write a new taskflow? These questions are very often answered simply: you can apply a taskflow customization to the out-of-the-box taskflows. In this post I will help you understand how to do such a customization; it is best described as a 4-step process. Just to clarify a few naming confusions that might occur while going through this process: Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows; the WebCenter Portal – Spaces Taskflow Customization Framework does not only refer to WebCenter Spaces, it also includes WebCenter Portal/Framework; and a taskflow customization does not overwrite or replace any code, it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces). To sum up this simple procedure, I would also like to help you find your way around the main topic of this post series, which focuses primarily on Content integration with WebCenter Portal: where can I find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and their page fragments.

    Library Reference - WebCenter Document Library Service View
    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - The Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - The publishing wizard to select content, select a template and preview, including contribution
    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - The Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing

    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • How to practice object oriented programming?

    - by user1620696
    I've always programmed in procedural languages and currently I'm moving towards object orientation. The main problem I've faced is that I can't see a way to practice object orientation effectively. I'll explain my point. When I learned PHP and C it was pretty easy to practice: it was just a matter of choosing something and thinking about an algorithm for that thing. In PHP, for example, it was a matter of sitting down and thinking: "well, just to practice, let me build an application with an administration area where people can add products". That was pretty easy; it was a matter of thinking of an algorithm to register a user, to log the user in, and to add the products. Combining these with PHP features, it was a good way to practice. Now, in object orientation we have lots of additional things. It's not just a matter of thinking about an algorithm, but of analysing requirements more deeply, writing use cases, figuring out class diagrams, properties and methods, setting up dependency injection and lots of other things. The main point is that, in the way I've been learning object orientation, it seems that a good design is crucial, while in procedural languages one vague idea was enough. I'm not saying that in procedural languages we can write good software without design, just that for the sake of practicing it is feasible, while in object orientation it seems not feasible to go without a good design, even for practicing. This seems to be a problem, because if each time I'm going to practice I need to figure out tons of requirements, use cases and so on, it doesn't seem to be a good way to get better at object orientation, because it requires me to have one whole idea for an app every time I practice. Because of that, what's a good way to practice object orientation?

    Read the article

  • Google Top Geek E04

    Google Top Geek E04 (in Spanish). Google Top Geek is a weekly show from Google Mexico. This week: 1. Esto es Google, Google's biggest and most important event in Mexico, now in its second edition, took place on November 13 and 14, 2012. It was a great event aimed at the whole ecosystem in Mexico: developers, users and businesses. Close to 3,000 attendees honored us with their presence at Esto es Google over two intense days full of talks, panels and spaces to discover and get closer to technology and startups. During this segment we mentioned links for learning more about the importance of the mobile market in Mexico and worldwide: Go Mobile, for turning your current site into a mobile version, and The Mobile Playbook, with plenty of information for making the best decisions about mobile and modern technologies. 2. From programming and business competitions to internships and full-time jobs, Google offers a wide range of opportunities around the world. For example, the Google Code-in 2012 contest for high-school students is about to start, with a format similar to Google Summer of Code and 10 open source organizations acting as mentors. 3. Launches of the week, the first one of interest for Gmail: searching by size, using size:5m, larger:, flexible dates, etc. In Google Drive you can now search by person - not only documents that have been shared with you, but those that involve a given person. Searches of the week ... (video: 15:50, from GoogleDevelopers)

    Read the article

  • Logarithmic spacing of FFT bins

    - by Mykel Stone
    I'm trying to work through the examples in the GameDev.net Beat Detection article ( http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html ). I have no issue with performing an FFT, getting the frequency data and doing most of the article. I'm running into trouble, though, in section 2.B, Enhancements and beat decision factors. In this section the author gives 3 equations, numbered R10-R12, to be used to determine how many bins go into each subband: R10 - linear increase of the width of the subband with its index; R11 - we can choose, for example, the width of the first subband; R12 - the sum of all the widths must not exceed 1024. He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
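
    Reading R10 as a linear law for the subband widths, w(i) = a*i + b, the two constants follow from R11 and R12: choosing the first width fixes a + b, and making the widths sum to the total number of bins fixes the rest. A small sketch of that calculation (the choice of 32 subbands and a first width of 2 bins is just an example, not from the article):

    def subband_widths(n_subbands=32, first_width=2, total_bins=1024):
        # R10: w(i) = a*i + b, for subband index i = 1..N
        # R11: w(1) = a + b = first_width (you choose this)
        # R12: sum of all widths = a*N*(N+1)/2 + b*N = total_bins
        # Substituting b = first_width - a into R12 and solving for a:
        a = (total_bins - n_subbands * first_width) / (n_subbands * (n_subbands - 1) / 2)
        b = first_width - a
        # Round each width to whole bins; let the last subband absorb the error.
        widths = [round(a * i + b) for i in range(1, n_subbands + 1)]
        widths[-1] += total_bins - sum(widths)
        return a, b, widths

    a, b, widths = subband_widths()
    print(a, b, sum(widths))   # a is about 1.935, b about 0.065, widths sum to 1024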

    Read the article

  • Star Trek inspired home automation visualisation

    - by Zak McKracken
    I've always been a more or less active fan of Star Trek. During the construction phase of my house I started coding a GUI for controlling the house, which has an EIB. Just for fun I designed a version inspired by the LCARS design used in Star Trek TNG and showed it to my wife. I had shown her several designs before, but this was the only one she really liked, so I decided to go on with it. I started a C# WinForms application. The software runs on a wall-mounted Shuttle barebone PC. The first plan was an industrial panel PC, but its processor was too slow; the Atom used now is OK. I started with the LCARS controls found on CodeProject. Since the classic LCARS design divides the screen into two parts this tended to be impractical, so I used my own design. For now the software is able to: switch lights/wall outlets; show current temperatures for all room controllers; show the outside temperature with a 24h trend chart; show the status of the two heat pumps; provide an alarm clock (e.g. for cooking); play internet radio streams; control absence; mute the door bell; and speak status messages via speech synthesis. At the moment I'm working on an integration of my electric meter. The main heat pump and the electric meter are connected to my LAN. I also tried some speech recognition, but I have problems with the microphone: it works when you are right in front of the PC, but not far away, let's say on the other side of the room. So this is the main view. The table displays raw values which are sent over the EIB - completely useless, but it looks great. For each floor I have a different view. Here you can see the temperatures and check the status of the lights (the buttons blink when a light is switched on). This is the view for the heat pump. The next step would be to integrate control of my Squeezebox server (I use different Squeezeboxes throughout the house as a multiroom audio solution).

    Read the article

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle: Any application or solution built in SharePoint must use a custom content type over adding columns to lists. The only exception to this is one-off solutions that have no life-cycle, proof-of-concepts, etc.

    Creating Content Types:
    - Web UI. Not portable, POC only.
    - C# or Declarative (XML). Must deploy these as Features.

    Rule: Do not change the base XML for a Content Type after deploying. The only exception to this rule is that you can re-deploy a modified Content Type definition only after completely removing it from the environment (either programmatically or by hand).

    Updating Content Types (update and push down to child types):
    - Web UI. Manual for each environment. Document the steps required for repeatability.
    - Feature Upgrade. Preferred solution.
    - C#. If you created the content type through code you might want to go this route.
    - Create new, modified Content Types and hide the old one. Not recommended, but useful for legacy.

    References: Create Custom Content Types in SharePoint 2010 (C#); Content Type Definitions (XML); Creating Content Types (XML and C#); Updating Approaches; Updating Child Content Types.

    Agree or disagree?

    Read the article
