Search Results

Search found 263 results on 11 pages for 'archiving'.

Page 10/11 | < Previous Page | 6 7 8 9 10 11  | Next Page >

  • Tweeting about Oracle Applications Usability: Points to Consider

    - by ultan o'broin
    Here are a few pointers to anyone interested in tweeting about Oracle Applications usability or user experience (UX). These are based on my own experiences and practice, and may not necessarily reflect the views of Oracle, of course (touché, see the footer). If you are an Oracle employee and tweet about our offerings, then read up and follow the corporate social media policy. For the record, I tweet under the following account names: @ultan, @localization, @gamifyOracle, and @usableapps. The last two are supposedly Oracle subject-dedicated, but I mix it up on occasion. Fill out your Twitter account profile, and add a profile picture too. Disclose your interest. Don’t leave either the profile or image blank if you want to be taken seriously (or followed by me). Don’t tweet from a locked down Twitter account, as the message cannot be circulated to anyone who doesn't follow you. Open up the account if you really want to get that UX message out. Stay on message. The usable apps website, Misha Vaughan's VoX blog, and the Oracle Applications blog are good sources of UX messages and information, but you can find many other product team, individual, and corporate-wide sources with a little bit of searching. Set up a Google Alert with pertinent related keywords to get a daily digest of new information right in your inbox. Be original about it. Add your own insight and wit to the message, where relevant. Just circulating and RTing stock headlines adds no value to your effort or to the reader, and is somewhat lazy, in my opinion. Leave room for RTing of your tweet. So, don’t max out those 140 characters. Keep it under 130 if you want to be RTed without modification (or at all; I am not a fan of modifying tweets [MT], way too much effort for the medium). Remove articles and punctuation marks and use fragments, abbreviations, and so on at will to keep the tweet short enough, but leave keywords intact, as people search on those. Follow any Fusion UX Advocates who are on Twitter too (you can search for these names), and not just Oracle employees. Don't just follow people you like or who you think are like-minded. Take a look at who is following or being followed by other tweeters and, er, follow up. Create and socialize others to use an easily remembered or typed hashtag, or use what’s already popularized (for an event or conference, for example). We used #gamifyOracle for the applications UX gamification design jam, and other popular applications UX ones are #fusionapps and #usableapps (or at least I’m trying to popularize it). But, before you start the messaging, if you want to keep a record of the hashtag traffic, then set it up with an archiving service. Twitter’s own tweet lifespan is short. Don't mix up hashtags (#) with Twitter handles (@) that have the same name. Sending a tweet to @gamifyOracle will just be seen by @gamifyOracle (me) and any followers we have in common. Sending it to #gamifyOracle is seen by anyone following or searching for that hashtag. No dissing the competition. But there is no rule about not following them on Twitter to see the market reactions to Oracle announcements, and this can even let you tailor your own message accordingly. Don’t be boring. Mix it up a bit. Every 10th or so tweet, divert into other areas of interest, personal ones, even. No constant “I just received K+ in this and that” or “I just checked into wherever” on foursquare pouring into the Twitterstream, please. 
    I just don’t care and will probably unfollow such people pretty quickly. And now, your Twitter tips and experiences with this subject? They go in the comments...

    Read the article

  • How a Redo Gap Is Detected and Resolved in Data Guard

    - by JaneZhang
    In Oracle Data Guard, a redo gap means the standby is missing one or more archived redo logs that the primary has already generated, typically because the connection between the two sites was down for a while. The processes involved are: ARC: the archiver process; MRP: Media Recovery Process, which applies redo on the standby; RFS: Remote File Server, which receives redo on the standby and writes it to disk; FAL: Fetch Archive Log, the mechanism the standby uses to request missing archived logs. Purpose: deliberately create a gap on the standby and observe how it is detected and resolved. Environment: Oracle 11.2.0.2 on Linux 5.
    Steps:
    1. Check the current maximum log sequence on both sides:
    Primary: MAX(SEQUENCE#) 86
    Standby: MAX(SEQUENCE#) 86
    2. Break the network to create a gap. On the standby host, bring the interface down: #ifconfig eth0 down. On the primary, switch logfiles several times: SQL>alter system switch logfile; SQL>alter system switch logfile; ... Primary: MAX(SEQUENCE#) 96. The primary alert log shows that the archived logs cannot be shipped to the standby:
    TNS-00513: Destination host unreachable   nt secondary err code: 101   nt OS err code: 0
    Error 12543 received logging on to the standby
    FAL[server, ARCp]: Error 12543 creating remote archivelog file 'STANDBY'
    FAL[server, ARCp]: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    ORACLE Instance orcl - Archival Error. Archiver continuing.
    3. On the standby, move away the archived logs that have already been received, so that they will have to be shipped again: mv *.arc ../
    4. Restore the network: #ifconfig eth0 up
    5. The ARC processes on the primary now ship the current archived logs to the standby, and the MRP detects the gap and starts gap fetching. Standby alert log:
    Thu Mar 29 19:58:49 2012
    Media Recovery Waiting for thread 1 sequence 87 (in transit) <==== waiting here, at sequence 87
    ...
    Thu Mar 29 20:08:45 2012
    ...
    Media Recovery Waiting for thread 1 sequence 94
    Thu Mar 29 20:11:01 2012
    RFS[61]: Assigned to RFS process 13643
    RFS[61]: Opened log for thread 1 sequence 97 dbid 1285401128 branch 757620395
    Archived Log entry 80 added for thread 1 sequence 97 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:11:02 2012
    RFS[62]: Assigned to RFS process 13645
    RFS[62]: Selected log 4 for thread 1 sequence 98 dbid 1285401128 branch 757620395
    Thu Mar 29 20:11:02 2012
    Primary database is in MAXIMUM PERFORMANCE mode
    Re-archiving standby log 4 thread 1 sequence 98
    Thu Mar 29 20:11:02 2012
    Archived Log entry 81 added for thread 1 sequence 98 ID 0x4c9d8928 dest 1:
    RFS[63]: Assigned to RFS process 13647
    RFS[63]: Selected log 4 for thread 1 sequence 99 dbid 1285401128 branch 757620395
    Thu Mar 29 20:11:05 2012
    Fetching gap sequence in thread 1, gap sequence 94-96 <=========== the gap is detected
    ...
    6. The MRP trace also shows the MRP fetching the gap:
    *** 2012-03-29 20:08:45.375 4265 krsh.c
    Media Recovery Waiting for thread 1 sequence 94
    *** 2012-03-29 20:11:05.543
    *** 2012-03-29 20:11:05.543 4265 krsh.c
    Fetching gap sequence in thread 1, gap sequence 94-96 <========== MRP detects the gap
    Redo shipping client performing standby login
    *** 2012-03-29 20:11:05.593 4595 krsu.c
    Logged on to standby successfully
    Client logon and security negotiation successful!
    7. The missing logs are then received by RFS processes on the standby, and the MRP applies them:
    Thu Mar 29 20:12:06 2012
    RFS[64]: Assigned to RFS process 13649
    RFS[64]: Opened log for thread 1 sequence 94 dbid 1285401128 branch 757620395
    Archived Log entry 82 added for thread 1 sequence 94 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:12:06 2012
    RFS[65]: Assigned to RFS process 13651
    RFS[65]: Opened log for thread 1 sequence 95 dbid 1285401128 branch 757620395
    Thu Mar 29 20:12:06 2012
    RFS[66]: Assigned to RFS process 13653
    RFS[66]: Opened log for thread 1 sequence 96 dbid 1285401128 branch 757620395
    Archived Log entry 83 added for thread 1 sequence 95 rlc 757620395 ID 0x4c9d8928 dest 2:
    Archived Log entry 84 added for thread 1 sequence 96 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:12:16 2012
    Media Recovery Log /home/oracle/arch1/standby/1_94_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_95_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_96_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_97_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_98_757620395.arc
    Summary: in this test, once the gap appeared it was resolved automatically; the primary's ARC processes shipped the missing sequences, the MRP detected the gap while applying logs, and no manual FAL configuration was needed. Note: in 11g the ARC, RFS and MRP processes normally cooperate to resolve a gap automatically.
    8. If the gap has to be fetched by the MRP itself through FAL and that fails (for example because of an authentication problem), the MRP trace shows the FAL client errors:
    *** 2012-03-29 21:18:15.964 4265 krsh.c
    Error 1031 received logging on to the standby
    *** 2012-03-29 21:18:15.964 4265 krsh.c
    FAL[client, MRP0]: Error 1031 connecting to PRIMARY for fetching gap sequence

    Read the article

  • Changing the Android emulator locale automatically

    - by Christopher
    For automated testing (using Hudson) I have a script that generates a bunch of emulators for many combinations of Android OS version, screen resolution, screen density and language. This works fine, except for the language part. I need to find a way to change the Android system locale automatically. Here are some approaches I can think of, in order of preference: Extracting/editing/repacking a QEMU image directly before starting the emulator Running some sort of system-locale-changing APK on the emulator after startup Changing the locale settings on the emulator filesystem after startup Changing the locale settings in some SQLite DB on the emulator after startup Running a key sequence (via the emulator's telnet interface) that would open the settings app and change the locale Manually starting the emulator for each platform version, changing the locale by hand in the settings, saving it and archiving the images for later deployment Any ideas whether this can be done, either via the above methods or otherwise? Do you know where locale settings are persisted to/read from by the system? Solution: Thanks to dtmilano's info about the relevant properties, and some further investigation on my part, I came up with a solution even better and simpler than all the ideas above! I have updated the answer below with the details.
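    A minimal Java sketch of the system-property route that the solution above points at, assuming the emulator answers to the serial emulator-5554 and that the image honours the persist.sys.language / persist.sys.country properties (newer images use persist.sys.locale instead); the class and method names are made up for illustration.

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class EmulatorLocale {

            // Runs "adb -s <serial> shell <args...>" and waits for it to finish.
            private static int adbShell(String serial, String... args)
                    throws IOException, InterruptedException {
                List<String> cmd = new ArrayList<String>(Arrays.asList("adb", "-s", serial, "shell"));
                cmd.addAll(Arrays.asList(args));
                return new ProcessBuilder(cmd).inheritIO().start().waitFor();
            }

            public static void setLocale(String serial, String language, String country)
                    throws IOException, InterruptedException {
                // Persistent system properties survive a framework restart; emulator images allow setprop.
                adbShell(serial, "setprop", "persist.sys.language", language);
                adbShell(serial, "setprop", "persist.sys.country", country);
                // Restart the Android framework so the new locale takes effect.
                adbShell(serial, "stop");
                adbShell(serial, "start");
            }

            public static void main(String[] args) throws Exception {
                setLocale("emulator-5554", "de", "DE");
            }
        }

    Driving adb this way keeps locale changes scriptable from the same Hudson job that creates the emulators; the cost is one framework restart per locale change.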

    Read the article

  • Embeddable forum software

    - by Rented
    I am in the planning stages of a specific subject matter community web site, and one feature I feel is required is that of member discussions. However, not in a typical forum style. For example, I don't want the members to have to navigate away from their own "user space" in order to discuss a topic. I think it is best described with an analogous example. Let's say the site is for literature buffs, and each member has a set of pages for keeping notes, progress, questions, etc. on books they are studying/reading. So Joe will have one page for Great Expectations, another for Hamlet, a third for I, Robot, and so forth. Likewise, Jane will have a page for Don Quixote, Lord of the Flies, and also I, Robot. Now, wouldn't it be nice if Joe and Jane could discuss I, Robot from within their own respective pages? Now, at first thought, roll your own seems like the way to go. However, once we start getting into issues such as spam blocking, banning, ratings, pruning, archiving, flooding and so on, well "roll your own" doesn't sound too appealing anymore. Also, I have next to zero experience with forum software. So I'm looking for forum software that has an extensive API or is generally very integration-friendly. I would like to be able to create user groups, topics, permissions, etc. programmatically, as well as the obvious user authentication (most seem open in that respect). The site will most probably be built with Java. Tangler seems like a decent option, but it seems less mature than what I'd prefer.

    Read the article

  • Adding User License Agreement in Solaris package

    - by Adil
    I have asked a similar question for Linux RPM (http://stackoverflow.com/questions/2132828/adding-license-agreement-in-rpm-package). Now I have the same query for a Solaris package. I could not find any helpful links or details on whether it is possible. But I have found a package which does exactly the same thing; how it has been implemented, however, is not mentioned. $pkgadd -d . SUNWsamfsr SUNWsamfsu Processing package instance from Sun SAM and Sun SAM-QFS software Solaris 10 (root)(i386) 4.6.5,REV=5.10.2007.03.12 Sun SAMFS - Storage & Archiving Management File System Copyright (c) 2007 Sun Microsystems, Inc. All Rights Reserved. ----------------------------------------------------- In order to install SUNWsamfsr, you must accept the terms of the Sun License Agreement. Enter "y" if you do, "n" if you don't, or "v" to view agreement. y -The administrator commands will be executable by root only (group bin). If this is the desired value, enter "y". If you want to change the specified value enter "c". y ... ... Any ideas how to implement such a thing for a Solaris package?

    Read the article

  • What are the reasons to store documents into DBMS when using Alfresco CMS

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open source solutions in the first place, their top choice being Alfresco, but the decision is still not final; part of my work there would be to investigate whether Alfresco is the best solution. From what I have seen in the project description, they would implement Alfresco with a MySQL database, and they do not want to use the DBMS just for document metadata and indexing; they actually want to store the documents inside it. By company profile, the type of documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in the database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the profile of the company and their legal obligations around archiving, I see they will have to store a large amount of documents. I would like to be as ready as I can for the interview, and that is why I would like your opinion on these questions: What would be your reasons for deciding to store documents in the DBMS (especially having in mind that you are installing Alfresco, which stores files in the FS)? Do you have any experience with storing documents in a MySQL database specifically? All help is very much appreciated; I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • What are the reasons to store documents into DBMS when using Alfresco DMS

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open source solutions in the first place, their top choice being Alfresco, but the decision is still not final; part of my work there would be to investigate whether Alfresco is the best solution. From what I have seen in the project description, they would implement Alfresco with a MySQL database, and they do not want to use the DBMS just for document metadata and indexing; they actually want to store the documents inside it. By company profile, the type of documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in the database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the profile of the company and their legal obligations around archiving, I see they will have to store a large amount of documents. I would like to be as ready as I can for the interview, and that is why I would like your opinion on these questions: 1) What would be your reasons for deciding to store documents in the DBMS (especially having in mind that you are installing Alfresco, which stores files in the FS)? 2) Do you have any experience with storing documents in a MySQL database specifically? All help is very much appreciated; I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • JMX - MBean automated registration on application deployment

    - by Gadi
    Hi All, I need some direction with JMX and J2EE. I am aware (after a few weeks of research) that the JMX specification is lacking as far as deployment is concerned. There are a few vendor-specific implementations of what I am looking for, but none are cross-vendor. I would like to automate the deployment of MBeans and their registration with the server. I need the server to load and register my MBeans when the application is deployed and remove them when the application is un-deployed. I develop with: NetBeans 6.7.1, GlassFish 2.1, J2EE5, EJB3. More specifically, I need a way to manage timer service runs. My application needs to run different archiving agents and batch reporting. I was hoping JMX would give me remote access to create and manage the timer services and enable the user to create his own schedule. If the MBean is auto-registered on application deployment, the user can immediately connect and manage the schedule. On the other hand, how can an EJB connect to/access an MBean? Many thanks in advance. Gadi.
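    One portable way to get register-on-deploy and unregister-on-undeploy behaviour, sketched below against the platform MBean server, is to tie the registration to the web module's lifecycle; the ArchiveScheduler bean and the myapp domain are invented for illustration, and each public type would live in its own source file. An EJB can reach the same MBean by calling ManagementFactory.getPlatformMBeanServer() and addressing it by its ObjectName.

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // --- ArchiveSchedulerMBean.java --- standard MBean contract: interface name = impl name + "MBean".
        public interface ArchiveSchedulerMBean {
            int getIntervalMinutes();
            void setIntervalMinutes(int minutes);
        }

        // --- ArchiveScheduler.java ---
        public class ArchiveScheduler implements ArchiveSchedulerMBean {
            private volatile int intervalMinutes = 60;
            public int getIntervalMinutes() { return intervalMinutes; }
            public void setIntervalMinutes(int minutes) { this.intervalMinutes = minutes; }
        }

        // --- MBeanLifecycleListener.java --- declared in web.xml (or @WebListener on Servlet 3.0+),
        // so it runs exactly once per deployment and once per undeployment.
        public class MBeanLifecycleListener implements ServletContextListener {

            private ObjectName name;

            public void contextInitialized(ServletContextEvent sce) {
                try {
                    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                    name = new ObjectName("myapp:type=ArchiveScheduler");
                    server.registerMBean(new ArchiveScheduler(), name);
                } catch (Exception e) {
                    throw new IllegalStateException("MBean registration failed", e);
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                try {
                    ManagementFactory.getPlatformMBeanServer().unregisterMBean(name);
                } catch (Exception e) {
                    // Nothing useful to do on undeploy; log if desired.
                }
            }
        }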

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future). I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experiences on that matter. BTW. I don't currently have any need to depend on XML features within SQL Server (there's nearly zero advantage to me in the specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

    Read the article

  • Objective-C / Cocoa: Uploading Images, Working Memory, And Storage.

    - by Finley Still
    Hello. I'm in the process of porting an application originally written in Java to Cocoa, but I'm rewriting it to make it much better, since I prefer Cocoa a lot anyway. One of the problems I had in the application was that when you uploaded images to it, I created the images (as, say, an NSImage object) and then just had them sitting in memory; the more I uploaded, the more memory they took up, and I ended up running out of memory. My question is this: if I am going to have users upload images to this application in Cocoa, how should I go about storing them? I don't just want to copy the file paths, because I want what is saved to contain the images, etc. Is there any way to upload an image and copy it into a different place only for my application? Then load that image with the new path name as needed? Only I would like it all to be consolidated. I'm going to implement saving by archiving one "master" object into an NSData*, so I'd like the images to be saved with that. Is there a temporary location maybe where I could write the images to disk for my application, and then when I saved, they would all be archived into a single file? Also, how do I do this? Thanks.

    Read the article

  • How to remove a specific category on a selected mail in Outlook 2003 with Macro?

    - by szekelya
    Hi, I am trying to transform my Outlook 2003 into the closest thing to gmail. I started to use categories, which are pretty similar to labels in gmail. I can assign categories automatically with rules, and I can add categories manually. I have also created "search folders" that show all mails with a given category, if they are not in the Deleted Items or Sent Items folders. This part is almost like the Label views in gmail. Two things are missing basically, which should be done with macros (VBA to be precise), which I'm totally inexperienced with. So hence my questions: -Can someone show me a macro to remove the category "Inbox"? That would act exactly like the Archive button in gmail. In fact I want to assign this macro to a toolbar button and call it Archive. I have a rule that adds the Inbox category to all incoming mail. As I said, I have a search folder displaying all mails categorized as Inbox, and I also have an All Mail search folder that displays all messages regardless of whether they have the Inbox category. Exactly like gmail, just the easy archiving is missing. -Can someone show me a macro that would delete the selected mail/mails and also remove the Inbox category before deletion? I would replace the default delete button with this macro. (Somewhat less important, as in my search folders I can filter messages that are physically placed in the Deleted Items folder, but it would be more elegant not to have mails categorized as Inbox in the trash.) Many thanks in advance, szekelya

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: Suppose I have a table that holds orders, with a bunch of data. So I'm at 1M records, and searches begin to take time. So I want to speed it up by archiving some data that is more than 3 years old - saving it into a table called orders-archive, and then purging it from the orders table. So if we need to research something or a customer wants to pull older information - they still can, but 99% of the lookups are done on orders no older than a year and a half - so there is no reason to keep looking through older data all the time. These move & purge operations can then be run from cron on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right? However I was thinking about how to implement older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However - I have about 20 tables that I want to archive, and god knows how many searches / finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

    Read the article

  • Implementing an intelligent file transfer software in Java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software which can move files from a source to a destination. The overall goal of this project is to create intelligent file transfer. This software will have three components: 1) Broker: the module that communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information for the monitor, archives files, writes all process data to log files and escalates issues if necessary. 2) Configuration Manager: a web-based application used to configure and deploy the configuration to all brokers. 3) Monitor: a web-based application used to monitor each Broker in the environment. This project has to be built in Java, with file transfer over TCP/IP. The client does not want to use FTP. File transfer seems very easy, until there are several processes waiting to pick the file up automatically. Several problems arise: How can we guarantee the file is received at the destination? If a file isn't received the first time, how do we try again (even after a restart or power breakdown)? How does the receiver know the file that is received is complete? How can we transfer multiple files synchronously? How can we protect the bandwidth, so file transfer isn't blocking other processes? How does one interoperate between multiple OS platforms? What about authentication? How can we monitor the workflow? Auditing / logging? Archiving? Can you please provide answers to some of these? Thanks
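    As a sketch of one common answer to the completeness and delivery-guarantee questions above (hypothetical class name, Java 7 or later): the sender transmits the file length and a SHA-256 digest alongside the data, the receiver recomputes both after the transfer and acknowledges only on a match, and the sender retries until it sees that acknowledgement.

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class FileIntegrity {

            // Streams the file through SHA-256 so arbitrarily large files can be checked.
            public static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        digest.update(buffer, 0, read);
                    }
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : digest.digest()) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                Path file = Paths.get(args[0]);
                // Sender transmits these two values; receiver recomputes them and acknowledges on a match.
                System.out.println("length=" + Files.size(file));
                System.out.println("sha256=" + sha256Hex(file));
            }
        }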

    Read the article

  • iPhone: Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serializing with NSCoding. In testing between a 3G and 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send a GKSendDataUnreliable and have the peer send an acknowledgement so I can send again if I don't get the Ack within, say, 100ms? How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from an object creation/deallocation overhead, or is something particularly slow happening? I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time? Are there any other bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serializing with NSCoding. In testing between a 3G and 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send a GKSendDataUnreliable and have the peer send an acknowledgement so I can send again if I don't get the Ack within, say, 100ms? How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from an object creation/deallocation overhead, or is something particularly slow happening? I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time? Are there any other bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • My iOS app has a + in its name. Bundle is invalid due to this. Need help resolving

    - by d.altman
    I did find a couple of very similar or identical threads here but they seemed to end before full resolution. My app runs fine on my device with no build errors. I am trying to submit the app for approval and I get the following error: "This bundle is invalid. The executable name, as reported by CFBundleExecutable in the info.plist file may not contain any of these characters ..... +". So I opened my info.plist file and changed the executable name from the macro ${EXECUTABLE_NAME} to the name of my app without the +. I did a new archive but then got an error saying "codesign failed with exit code 1". In another thread I read to just change the target's name, removing the + from there and leaving the info.plist file with the macro for the executable name, then restart Xcode and archive again. That allowed me to archive but I received the same error in iTunes Connect. I have been working on this all day and can't find the solution. Can anyone please point me in the right direction? Thank you for any help.

    Read the article

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "Cloud computing is the way of the future", and encouraging us to switch from doing our own email with Exchange, to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to do this transition within their own department, if not throughout the entire organization. I can definitely see some benefits - such as: Archive space - we never seem to have the space our users want, and of course, the more we get, the more we have to back up OS Agnostic - Exchange is definitely built for windows, and with mac and linux users on the rise, these users increasingly demand better tools / support. Google offers this. Better archiving - potential of e-discovery, that doesn't exist in a practical way with our current setup. Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks: We would be essentially outsourcing one of our mission-critical systems to a 3rd party The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a-lot more at the mercy of a single (potentially weak) password. (is there a way to make this more secure using a password plus physical key of some sort??) Our data would not remain under our roof - or even in our country (Canada). This obviously has plusses on the Disaster Recovery side, but I think there are potential negatives on the legal side. I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (not sure how much access we would have to basic mail logs - for instance) Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - gmail to Exchange) Can you add to the points I have already outlined?

    Read the article

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are: Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc. Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device Full text of web pages must be stored Bonus features would be: Bookmarks and page content should be full text searchable Maintain an archive indefinitely Distinguish between what's read vs. unread Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary html removed, pages better formatted for reading My current process (which addresses most of these requirements) is as follows: I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read" Gmail filters set up such that depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately On each of my browsers/devices, I have an address book entry for [email protected] and [email protected]. If I want to clean up the page content, I use the Readability bookmarklet which does a great job of giving me the essential content only Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses. Where I don't have Firefox (e.g. iPhone or other mobile device) I use the native ability to send the current link via email (most/all apps have them, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body. The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned up page due to Readability, ability to read offline (assuming I use an IMAP client against Gmail) and permanent archiving of content, including what's been read vs. unread. The missing pieces are: The Send Page by Email Firefox extension seems to only send X bytes of a web page. Or some portion. So it limits my full text searchability. Where I don't have Firefox, I can only send the link, so no full text search at all in those cases. Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something more broad like Gmail, which as a generalized service and with Google behind it pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as stores the entire page content, unlike my Firefox extension. Thoughts on addressing the gaps and improving this process further?

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Apr/10)

    - by Claudia Costa
    We are pleased to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. With the goal of helping partners develop competencies, Oracle University and Oracle Alliances & Channel have designed this boot camp, condensing the content and thereby reducing costs. Price per participant (3 days) - 1,250 Eur + VAT. Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users-wherever they work in the enterprise.   The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.   Target Audience:  The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.   Contents:  The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will: • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology • Provide the strategic direction about Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development • Expose a broad set of Oracle's ECM technologies.   Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM):   Topics Covered • Introduction to Oracle UCM o UCM Overview o UCM Architecture Overview • Content Server and Document Management basics o Installation and Administration Skills § User and Security Admin § Configuration (metadata, DCLs, profiles, rules, etc.) § Workflow Admin § System Properties and Component Manager § Managing Subscriptions o Contributing Content § Browser form § WebDAV folder § Desktop Integration o Searching • Web Content Management o Site Studio • Universal Records Management • Information Rights Management (IRM) • Image & Process Management (IPM) • Oracle Document Capture • Oracle eMail Archive Service. Labs • Content Server Installation • Use and Administration of Content Server • Introduction to Site Studio • Use and Administration of Records Manager Demo: The R&D Group and the New Patent Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management Case Study Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually the public Web site. Use Case 2: Help Acme & Co with archiving: its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.   
    Agenda:
    Day 1 ·  ECM Overview & Content Server ·  ECM Overview ·  ECM Architecture and Installation ·  UCM and Digital Asset Management DEMO ·  Lab 1 - Content Server Installation ·  Lab 2 - Use and Administration of Content Server
    Day 2 ·  Web Content Management ·  Lab 2 - Use and Administration of Content Server (continued) ·  Introduction to Web Content Management ·  Lab 3 - Site Studio
    Day 3 ·  URM/IRM/IPM ·  Introduction to Universal Records Management ·  Lab 4 - URM ·  Introduction to Information Rights Management ·  Information Rights Management DEMO ·  Introduction to Image and Process Management ·  Image and Process Management Demo ·  Oracle Document Capture ·  Oracle eMail Archive
    Material needed for the boot camp: This boot camp requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: Hardware • RAM: 2 GB minimum (1 GB RAM is not enough) • HDD: 15 GB free HDD space
    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.
    ---------------------------------------------------------------------
    For more information and registration, contact: Mónica Pires, 21 423 51 44. Schedule and location: 9:30 - 12:30 and 14:00 - 17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    Our second day at OpenWorld, and the Data Integration Team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team has had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers. For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager. Our customer panels this year have been very well attended, and on Tuesday we held the “Real World Operational Reporting with Oracle GoldenGate Customer Panel”. On this panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to overview their use of our product, followed by a barrage of questions from the audience. Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server and emphasized the need for using standard naming conventions when customers configure GoldenGate, as the practice is immensely helpful when debugging a problem. Joy Mathew and Venkat Govindarajan from Comcast described how they have used GoldenGate for over a decade and their experiences of using the product for replicating data from HP NonStop to Teradata. Serkan Karatas from Turk Telekom dove into using Oracle GoldenGate and the value of archiving data in extremely large databases, which in Turk Telekom's case resulted in a 1-month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative. For Wednesday we have a number of sessions available to attendees plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com. 
    Sessions
    11:45 AM - 12:45 PM Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata - Moscone South - 102
    1:15 PM - 2:15 PM Customer Perspectives: Oracle Data Integrator - Marriott Marquis - Golden Gate C3
    Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle - Moscone West - 2003
    Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Moscone West - 3000
    3:30 PM - 4:30 PM Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active - Moscone West - 3000
    5:00 PM - 6:00 PM Tuning and Troubleshooting Oracle GoldenGate on Oracle Database - Moscone South - 102
    Hands-on Labs
    10:15 AM - 11:15 AM Introduction to Oracle GoldenGate Veridata - Marriott Marquis - Salon 1/2
    11:45 AM - 12:45 PM Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab - Marriott Marquis - Salon 1/2
    If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On Document.

    Read the article

  • Computer Networks UNISA - Chap 14 – Insuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to Identify the characteristics of a network that keep data safe from loss or damage Protect an enterprise-wide network from viruses Explain network and system level fault tolerance techniques Discuss issues related to network backup and recovery strategies Describe the components of a useful disaster recovery plan and the options for disaster contingencies What are integrity and availability? Integrity – the soundness of a networks programs, data, services, devices, and connections Availability – How consistently and reliably a file or system can be accessed by authorized personnel A number of phenomena can compromise both integrity and availability including… security breaches natural disasters malicious intruders power flaws human error users etc Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines… Allow only network administrators to create or modify NOS and application system users. Monitor the network for unauthorized access or changes Record authorized system changes in a change management system’ Install redundant components Perform regular health checks on the network Check system performance, error logs, and the system log book regularly Keep backups Implement and enforce security and disaster recovery policies These are just some of the basics… Malware Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of Malware… Boot sector viruses Macro viruses File infector viruses Worms Trojan Horse Network Viruses Bots Malware characteristics Some common characteristics of Malware include… Encryption Stealth Polymorphism Time dependence Malware Protection There are various tools available to protect you from malware called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware including… Signature Scanning Integrity Checking Monitoring unexpected file changes or virus like behaviours It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general purpose malware policies that can be implemented to protect your network including… Every compute in an organization should be equipped with malware detection and cleaning software that regularly runs Users should not be allowed to alter or disable the anti-malware software Users should know what to do in case the anti-malware program detects a malware virus Users should be prohibited from installing any unauthorized software on their systems System wide alerts should be issued to network users notifying them if a serious malware virus has been detected. Fault Tolerance Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability for a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees, the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally the more fault tolerant the system, the more expensive it is. The following describe some of the areas that need to be considered for fault tolerance. 
    Environment (temperature and humidity), Power, Topology and Connectivity, Servers, Storage.
    Power: Typical power flaws include Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power; Noise – fluctuation in voltage levels caused by other devices on the network or electromagnetic interference; Brownout – a sag in voltage for just a moment; Blackout – a complete power loss. There are various alternate power sources to consider, including UPSs and generators. UPSs are found in two categories… Standby UPS – provides continuous power when mains goes down (brief period of switching over); Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously).
    Servers: There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.
    Storage: There are various techniques available, including the following… RAID Arrays, NAS (Network Attached Storage), SANs (Storage Area Networks).
    Data Backup: A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist with various media, including Optical Media, Tape Backup, External Disk Drives and Network Backups. These vary in cost and speed.
    Backup Strategy: After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include… What data must be backed up? At what time of day or night will the backups occur? How will you verify the accuracy of the backups? Where and for how long will backup media be stored? Who will take responsibility for ensuring that backups occurred? How long will you save backups? Where will backup and recovery documentation be stored? Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, including… Full backup – all data on all servers is copied to storage media; Incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium; Differential backup – only data that has changed since the last full backup is copied to a storage medium.
    Disaster Recovery: Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
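    A small Java sketch (hypothetical class, Java 7 NIO) of the selection rule behind an incremental backup: only files modified after the previous backup's timestamp are picked up, whereas a differential run would compare against the timestamp of the last full backup instead.

        import java.io.IOException;
        import java.nio.file.FileVisitResult;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.SimpleFileVisitor;
        import java.nio.file.attribute.BasicFileAttributes;
        import java.nio.file.attribute.FileTime;
        import java.util.ArrayList;
        import java.util.List;

        public class IncrementalSelector {

            // Returns every regular file under root modified after the given cut-off time.
            public static List<Path> changedSince(Path root, final FileTime lastBackup) throws IOException {
                final List<Path> changed = new ArrayList<Path>();
                Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                        if (attrs.isRegularFile() && attrs.lastModifiedTime().compareTo(lastBackup) > 0) {
                            changed.add(file);
                        }
                        return FileVisitResult.CONTINUE;
                    }
                });
                return changed;
            }

            public static void main(String[] args) throws IOException {
                // For an incremental backup, pass the time of the previous backup (full or incremental);
                // for a differential backup, pass the time of the last full backup.
                Path root = Paths.get(args[0]);
                FileTime lastBackup = FileTime.fromMillis(Long.parseLong(args[1]));
                for (Path p : changedSince(root, lastBackup)) {
                    System.out.println(p);
                }
            }
        }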

    Read the article

  • Educause Top-Ten IT Issues - the most change in a decade or more

    - by user739873
    The Education IT Issue Panel has released the 2012 top-ten issues facing higher education IT leadership, and instead of the customary reshuffling of the same deck, the issues reflect much of the tumult and dynamism facing higher education generally.  I find it interesting (and encouraging) that at the top of this year's list is "Updating IT Professionals' Skills and Roles to Accommodate Emerging Technologies and Changing IT Management and Service Delivery Models."  This reflects, in my view, the realization that higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia. The remaining 9 top issues all speak, in some form or fashion, to the need for dramatic change, but not just in the areas of "funding IT" (code for cost containment or reduction), but rather the need to increase effectiveness and efficiency of the institution through the use of technology—leveraging the wave of BYOD (Bring Your Own Device) to the institution's advantage, rather than viewing it as a threat and a problem to be contained. Although it's #10 of 10, IT Governance (and establishment and implementation of the governance model throughout the institution) is key to effectively acting upon many of the preceding issues in this year's list.  In the majority of cases, technology exists to meet the needs and requirements to effectively address many of the challenges outlined in the top-ten issues list. Which brings me to my next point. Although I try not to sound too much like an Oracle commercial in these (all too infrequent) blog posts, I can't help but point out how much confluence there is between several of the top issues this year and what my colleagues and I have been evangelizing for some time. Starting from the bottom of the list up: 1) I'm gratified that research and the IT challenges it presents have made the cut.  Big Data (or Large Data as it's phrased in the report) is rapidly going to overwhelm much of what exists today even at our most prepared and well-equipped research universities.  Combine large data with the significantly more stringent requirements around data preservation, archiving, sharing, curation, etc. coming from granting agencies like NSF, and you have the brewing storm that could result in a lot of "one-off" solutions to a problem that could very well be addressed collectively and "at scale."   2) Transformative effects of IT – while I see more and more examples of this, there is still much more that can be achieved. My experience tells me that culture (as the report indicates or at least poses the question) gets in the way more than technology not being up to the task.  We spend too much time on "context" and not "core," and get lost in the weeds on the journey to truly transforming the institution with technology. 3) Analytics as a key element in improving various institutional outcomes.  In our work around Student Success, we see predictive "academic" analytics as essential to getting in front of the Student Success issue, regardless of how an institution or collections of institutions defines success.  Analytics must be part of the fabric of the key academic enterprise applications, not a bolt-on.  We will spend a significant amount of time on this topic during our semi-annual Education Industry Strategy Council meeting in Washington, D.C. later this month. 
4) Cloud strategy for the broad range of applications in the academic enterprise.  Some of the recent work by Casey Green at the Campus Computing Survey would seem to indicate that there is movement in this area but mostly in what has been termed "below the campus" application areas such as collaboration tools, recruiting, and alumni relations.  It's time to get serious about sourcing elements of mature applications like student information systems, HR, Finance, etc. leveraging a model other than traditional on-campus custom. I've only selected a few areas of the list to highlight, but the unifying theme here (and this is where I run the risk of sounding like an Oracle commercial) is that these lofty goals cry out for partners that can bring economies of scale to bear on the problems married with a deep understanding of the nuances unique to higher education.  In a recent piece in Educause Review on Student Information Systems, the author points out that "best of breed is back". Unfortunately I am compelled to point out that best of breed is a large part of the reason we have made as little progress as we have as an industry in advancing some of the causes outlined above.  Don't confuse "integrated" and "full stack" for vendor lock-in.  The best-of-breed market forces that Ron points to ensure that solutions have to be "integratable" or they don't survive in the marketplace. However, by leveraging the efficiencies afforded by adopting solutions that are pre-integrated (and possibly metered out as a service) allows us to shed unnecessary costs – as difficult as these decisions are to make and to drive throughout the organization. Cole

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing the database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done for every row. That means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would keep the old and new values of every column entry from T1. Our DBA advised against this method, because he said a separate schema means an extra I/O for every operation. Basically the AUDIT schema would be used only for some analysis and for entering values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find justification. It appears logical to me that, as the AUDIT data should not be tampered with, it should live in a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up onto tapes or deleted after sufficient time. A few stats: some tables (close to 20-30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it more than 3 months. Questions: Given all this, can you suggest any improvements? Does a separate schema affect disk I/O (one extra I/O for every schema)? Any general suggestions? Figure: +-------------------+ +-------------------+ | MYSYS | | MYSYS_AUDIT | | | | | | 1. T1 | | 1. T1_AUD | | 2. T2 | | 2. T2_AUD | | 3. T3 |--------->| 3. T3_AUD | | 4. T4 |(SELECT, | 4. T4_AUD | | . | INSERT) | . | | . | | . | | . | | . | | 100. T100 | | 50. T50_AUD | +-------------------+ +-------------------+ | | | | |(INSERT) | | | * +-------------------+ | MYSYS_ARC | | | | 1. T1_ARC | | 2. T2_ARC | | 3. T3_ARC | | 4. T4_ARC | | . | | . | | . | | 100. T100_ARC | +-------------------+ Apart from this, we have two more schemas with read-only rights, but they are mainly for ad hoc purposes and we don't mind the performance on them.

    Read the article

  • initWithCoder works, but init seems to be overwriting my object's properties?

    - by Zigrivers
    Hi guys, I've been trying to teach myself how to use the Archiving/Unarchiving methods of NSCoder, but I'm stumped. I have a Singleton class that I have defined with 8 NSInteger properties. I am trying to save this object to disk and then load from disk as needed. I've got the save part down and I have the load part down as well (according to NSLogs), but after my "initWithCoder:" method loads the object's properties appropriately, the "init" method runs and resets my object's properties back to zero. I'm probably missing something basic here, but would appreciate any help! My class methods for the Singleton class: + (Actor *)shareActorState { static Actor *actorState; @synchronized(self) { if (!actorState) { actorState = [[Actor alloc] init]; } } return actorState; } -(id)init { if (self = [super init]) { NSLog(@"New Init for Actor started...\nStrength: %d", self.strength); } return self; } -(id)initWithCoder:(NSCoder *)coder { if (self = [super init]) { strength = [coder decodeIntegerForKey:@"strength"]; dexterity = [coder decodeIntegerForKey:@"dexterity"]; stamina = [coder decodeIntegerForKey:@"stamina"]; will = [coder decodeIntegerForKey:@"will"]; intelligence = [coder decodeIntegerForKey:@"intelligence"]; agility = [coder decodeIntegerForKey:@"agility"]; aura = [coder decodeIntegerForKey:@"aura"]; eyesight = [coder decodeIntegerForKey:@"eyesight"]; NSLog(@"InitWithCoder executed....\nStrength: %d\nDexterity: %d", self.strength, self.dexterity); [self retain]; } return self; } -(void) encodeWithCoder:(NSCoder *)encoder { [encoder encodeInteger:strength forKey:@"strength"]; [encoder encodeInteger:dexterity forKey:@"dexterity"]; [encoder encodeInteger:stamina forKey:@"stamina"]; [encoder encodeInteger:will forKey:@"will"]; [encoder encodeInteger:intelligence forKey:@"intelligence"]; [encoder encodeInteger:agility forKey:@"agility"]; [encoder encodeInteger:aura forKey:@"aura"]; [encoder encodeInteger:eyesight forKey:@"eyesight"]; NSLog(@"encodeWithCoder executed...."); } -(void)dealloc { //My dealloc stuff goes here [super dealloc]; } I'm a noob when it comes to this stuff and have been trying to teach myself for the last month, so forgive anything obvious. Thanks for the help!

    Read the article

< Previous Page | 6 7 8 9 10 11  | Next Page >