Search Results

Search found 28092 results on 1124 pages for 'generated content'.


  • RTFMobile

    - by ultan o'broin
    It may seem obvious but it's worth stating again: the idea that mobile users are going to read lots of user assistance on their devices is just wrong. So, Jakob Nielsen's post Mobile Content Is Twice as Difficult serves as a timely reminder for anyone thinking of putting manuals as a form of user assistance onto mobile phones. There is also an excellent post on UXMag.com explaining that one of the ways to screw up with your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App. (Image copyright and referenced from UX Magazine 2010.) Instead, user assistance alternatives (if any at all) include one-time tours, graphics, in-context instructions, and so on. Not so sure that importing "humor" and "personality" works so well in the enterprise app space, myself. However, the message is clear: iPhone users don't read manuals. Great message. Users will figure it out, and if they can't, well then your app's UX is a problem and the app will fail. Shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX. Razorfish's Scatter/Gather blog says it all: "One thing that is particularly discouraging: most material currently available on 'Creating Content for the iPad' or similar themes turns out to be about getting traditional content onto, or into, the iPad." Now, manuals for non-end users in PDF format on eReaders are a different matter. I have research on that, but it's for another post.
    Technorati Tags: mobile, user assistance, UX, user experience, manuals, documentation

    Read the article

  • problem using pydoc in python

    - by rohanag
    I'm using pydoc in Python 2.7.3 to generate documentation for a file called PreProcessingAPI.py, which contains a class called PreProcessingAPI. At the beginning of the file I have the following imports:

        from __future__ import division
        from re import *
        from nltk.stem import porter

    The problem is that in the documentation generated by pydoc, nltk.stem.porter is shown as a Module. There is also a DATA heading with all sorts of variables I don't know about. Is there a way to avoid these variables and avoid showing nltk.stem.porter in the modules? I'm running the following command to generate the documentation (I've put pydoc.py in the directory containing my file):

        python pydoc.py -w PreProcessingAPI.py

    Here is the file generated: https://www.dropbox.com/s/4rb6ut99o25mwly/PreProcessingAPI.html
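
    A likely fix, sketched below: star-importing re is what fills the DATA section (re's flags and helpers become module-level names), and any module object left in the namespace is listed under Modules regardless of anything else. Importing only the names you need, importing the PorterStemmer class rather than the porter module, and declaring __all__ (which pydoc consults when deciding what to document under CLASSES/FUNCTIONS/DATA) keeps the page down to your own class. A minimal sketch, not the asker's actual module; the method and the particular re imports are illustrative:

        from __future__ import division

        # named imports: no module object and no stray constants end up in
        # this module's namespace
        from re import split, sub
        from nltk.stem.porter import PorterStemmer

        # pydoc documents only the names listed in __all__
        __all__ = ['PreProcessingAPI']

        class PreProcessingAPI(object):
            """Text pre-processing helpers."""

            def stem(self, word):
                # illustrative method, assumed for the sketch
                return PorterStemmer().stem(word)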

    Read the article

  • Set Up Of Common Name Of SSL Certificate To Protect Plesk Panel

    - by Cbomb
    A PCI compliance scanner is balking that the self-signed SSL certificate protecting secure access to Plesk Panel contains a name mismatch between the location of the Plesk Panel and the name on the certificate: the self-signed cert's name is "Parallels" and the domain to reach Plesk is 'ip address:8443'. So I figured I would go ahead and get a free SSL certificate to try to fix this error, but I used my server's domain name as the site name when I generated the certificate. So if I visit 'domain name:8443' all is fine, no SSL warning. But if I visit 'ip address:8443' (which I believe is what the scanner does) I get the certificate name mismatch error; DigiCert's SSL checker says that the certificate name should be the IP address. Can I even generate a certificate whose common name is an IP address? I am tempted to say I should just do what the PCI scanner accepts, but what is really the correct common name to use? Has anybody run into this issue before?
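
    For what it's worth, OpenSSL will happily put an IP address in a certificate; what validators generally check is a subjectAltName entry of type IP, not just the CN. A minimal self-signed sketch (203.0.113.10 is a placeholder for the server's real address). First create a san.cnf containing:

        [req]
        distinguished_name = dn
        x509_extensions    = v3_req
        prompt             = no

        [dn]
        CN = 203.0.113.10

        [v3_req]
        subjectAltName = IP:203.0.113.10

    then generate the self-signed key and certificate:

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -keyout panel.key -out panel.crt -config san.cnf

    Note that public CAs generally won't issue certificates for private or internal IP addresses, so for a panel reached by IP a self-signed certificate with the right SAN (or scanning by hostname instead) is the usual answer.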

    Read the article

  • Is defining every method/state per object in a series of UML diagrams representative of MDA in general?

    - by Max
    I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where "stuff happens". For example, a UML class "Content" could have the method DeleteFromFileSystem(void), which could be implemented like this:

        public partial class Content
        {
            public void DeleteFromFileSystem()
            {
                File.Delete(...);
            }
        }

    All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? So far my impression of MDA/DDD (which this has been called by higher-ups) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched, and interspersed into the mentioned gargantuan bombs. Please refrain from interpreting this as a rant - I am merely curious whether this is typical MDA or some sort of extreme MDA.

    UPDATE: Concerning the example above, in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality scattered over multiple places, instead of through one single interface for which we can provide a second implementation.

    Read the article

  • Apple RAID configuration vs Hardware RAID

    - by James Hill
    I am researching external HDDs capable of RAID 1 to store a large amount of video content during overseas filming. After filming, the content will be brought back to the office and offloaded onto our storage server. After doing some research, I've found that I can buy an external drive with a built-in RAID controller, or I can buy an external enclosure with two HDDs that I can configure in a RAID 1 array using the OS. RAID 1 is what we're looking for. I've done some reading on software RAID vs. hardware RAID, but the resources I've found don't discuss performance as it relates to video content, or what happens to a software RAID when the computer dies. Question 1: Will the hardware RAID be more performant when dealing with large video files? Question 2: If the Mac dies, does my RAID die with it (will my data be accessible on another Mac)?

    Read the article

  • Acer Allionone Z5810 Touchscreen Issues 12.04

    - by Johannes
    I have an Acer Allionone Z5810, and I can't get the touchscreen to work after I install 12.04. Here is the lsusb output:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 0596:0508 MicroTouch Systems, Inc.
        Bus 001 Device 004: ID 04b8:0005 Seiko Epson Corp. Printer
        Bus 001 Device 005: ID 07ca:1336 AVerMedia Technologies, Inc.
        Bus 002 Device 003: ID 04ca:0058 Lite-On Technology Corp.
        Bus 002 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
        Bus 002 Device 005: ID 04f2:b23f Chicony Electronics Co., Ltd

    The xinput --list output:

        Virtual core pointer                              id=2   [master pointer  (3)]
            Virtual core XTEST pointer                    id=4   [slave pointer   (2)]
            Lite-On Technology Corp. Wireless Device      id=9   [slave pointer   (2)]
            Lite-On Technology Corp. Wireless Device      id=10  [slave pointer   (2)]
        Virtual core keyboard                             id=3   [master keyboard (2)]
            Virtual core XTEST keyboard                   id=5   [slave keyboard  (3)]
            Power Button                                  id=6   [slave keyboard  (3)]
            Power Button                                  id=7   [slave keyboard  (3)]
            Lite-On Technology Corp. Wireless Device      id=8   [slave keyboard  (3)]
            USB 2.0 camera                                id=11  [slave keyboard  (3)]
            AT Translated Set 2 keyboard                  id=12  [slave keyboard  (3)]

    My xorg.conf contains:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.48 (buildmeister@swio-display x86-rhel47-04.nvidia.com) Sun Sep 9 21:31:39 PDT 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "TouchScreen"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier  "TouchScreen"
            Driver      "microtouch"
            Option      "Type" "finger"
            Option      "Device" "/dev/ttyS3"
            Option      "ScreenNo" "0"
            Option      "MinX" "0"
            Option      "MaxX" "16383"
            Option      "MinY" "0"
            Option      "MaxY" "16383"
            Option      "SendCoreEvents" "yes"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/psaux"
            Option      "Emulate3Buttons" "no"
            Option      "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Unknown"
            HorizSync    28.0 - 33.0
            VertRefresh  43.0 - 72.0
            Option       "DPMS"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            SubSection     "Display"
                Depth      24
            EndSubSection
        EndSection

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back: when testing my content in the Google authorship testing tool, it reported that I had been verified and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think?). However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had one connection). I re-set up my G+ account and tied it all in to my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try to verify the same page again, it reverts back to unverified. I thought I might have to just wait it out, but it has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question.) Cheers!

    Read the article

  • 7zip: Add files to new folder in archive via command line?

    - by cschol
    I am using 7zip for compressing a bunch of files. The files are in a directory structure, like this:

        MyDir\File1
        MyDir\File2
        MyDir\File3
        MyDir\MoreFiles\File4
        MyDir\MoreFiles\File5

    I want to create a 7z file with the following structure via command line:

        ZippedDir\File1
        ZippedDir\File2
        ZippedDir\File3
        ZippedDir\MoreFiles\File4
        ZippedDir\MoreFiles\File5

    Basically, I want to zip the content of MyDir\ into a new folder called ZippedDir\. I know I could copy the content into a directory called ZippedDir\ and then zip this new directory. However, I was wondering if there was a way to avoid this extra copy step and directly zip the content, if possible, via command line.
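
    One copy-free approach, sketched under the assumption that this is on an NTFS volume and that 7-Zip traverses the junction when it is named as the item to archive: expose MyDir under the name ZippedDir with a directory junction, archive the junction, then remove it. Archive entries are stored under the name you pass on the command line, so they come out as ZippedDir\...:

        :: create a junction named ZippedDir that points at MyDir (no data copied)
        mklink /J ZippedDir MyDir

        :: archive the junction; entries are stored as ZippedDir\File1, etc.
        7z a archive.7z ZippedDir

        :: remove the junction (the real MyDir is untouched)
        rmdir ZippedDir

    mklink requires cmd on Windows Vista or later; rmdir on a junction deletes only the link, not the target's contents.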

    Read the article

  • Defunct website taken over by spammer. How to stop them?

    - by Robert
    A friend of mine used to publish a small literary fiction magazine, both in print and on the web. In 2011 she announced that she was quitting, put a note on the website, and carefully unwound the subscriptions. She continued hosting the site (with all the back-issues available for free) until the beginning of this year, when she let the hosting lapse and the domain name expire. Today she discovered that some unknown person has purchased her former domain name and put up a modified version of her entire site. The design is different but all the content is the same, including all of the back-issues of the magazine (and the stories by diverse authors contained within), their cover art, news posts, and even her contact information. All the content would have been available from Archive.org, so it's no mystery how they got it. The only thing noticeably changed is a column added to the front page titled "Favorite Videos", with around 35 links to Youtube videos. The links are named things like "Video (Worry)" and "Video (Squirting)" and the videos all feature a man named Leo giving dubious advice and promoting his life-coaching website. Here's one of the suspect videos. There does not appear to be any connection between the content of the videos and my friend or her magazine. I also posted to the Security StackExchange to ask why someone would do this and what the security risks are to her. What I want to know here is, what can she do to stop them? To be clear she doesn't want the domain name back. She just doesn't want her name and copyrighted material used deceptively. Also, what (if anything) could she have done when shutting down her website to avoid this happening?

    Read the article

  • Disqus ads are disqusting and here is how you turn them off

    - by Gopinath
    After a couple of months, I spent some time yesterday reviewing my blog and coziie.com to see if everything is fine. Disqus, the best commenting system and an unusual suspect, was looking weird. The commenting sections of my sites were displaying links to third-party sites which I was not aware of. The content is annoying to me, and I believe my site's users are also annoyed. I don't remember configuring anything in Disqus to display ads or earn money by promoting others' content. Why on earth would I want to show content from someone else's website right inside the comments section and annoy readers? Here is a screen grab of a comment section that shows ads.

    It turns out that Disqus automatically enabled a feature called "Discovery" for all publishers who upgraded the commenting system to the latest release. I remember upgrading the commenting system to the latest release a couple of months ago, but I don't remember specifically allowing Disqus to spam my comment section!! I'm extremely unhappy with the way Disqus automatically enabled spamming comment sections in the name of so-called new features that benefit bloggers.

    How to turn off Discovery or ads in Disqus

    I turned them off as soon as I noticed them, and it's very easy to do. Here are the steps to follow to turn off ads in comments:

        1. Log in to Disqus.
        2. Switch to the Settings tab.
        3. Click on the Discovery tab.
        4. Choose the option "Just comments".
        5. Save the settings.

    Though it's easy to turn off the ads, it would have been nice if Disqus had not enabled them by default. Hey guys at Disqus, you lost my trust, and from now on I'll double-check before opting in to any new features.

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, dhcp3-server, eth0, eth1, eth2. Edit: removed br0 & br1.

    eth0 is the external connection; eth1 & eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students/teachers respectively. What I would like to have is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is bridging for internet. I have set up a script that I found here: Router, with the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe
        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "  External Interface:  $EXTIF"
        echo "  Internal Interface:  $INTIF"
        echo "  Internal Interface:  $INTIF2"
        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "  External IP: $EXTIP"
        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP from the eth1 & eth2 devices, and my computer can see them, and they it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card; I will test later. After changing the subnet to 255.255.255.0, pings will pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed so the operators on site know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem

    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information and decided I'd post it here for others to enjoy.

    Background

    The Get-SPShellAdmin commandlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution

    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # collect one result object per (database, user) pair; a plain hashtable
        # keyed on database name would throw on a database with several shell admins
        $results = @()

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where-Object {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get its shell admins and record db name and username
        $databasesToQuery | ForEach-Object {
            $dbName = $_.Name
            Get-SPShellAdmin -database $_.Id | ForEach-Object {
                $results += New-Object PSObject -Property @{Database = $dbName; UserName = $_.UserName}
            }
        }

        # sort results by db name and pipe to table with auto sizing of col width
        $results | Sort-Object -Property Database | Format-Table -AutoSize

    Conclusion

    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed, just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links

    PowerShell Hashtables
    http://technet.microsoft.com/en-us/library/ee692803.aspx

    SPShellAdmin Access Explained
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Sound not playing on Windows XP - SoundEffect or Song: Monogame

    - by ashes999
    I'm trying to integrate sound into my Monogame game. I don't have the content pipeline hack -- just straight Monogame (Beta 3) at this point. (I tried adding the content pipeline, but ran into some issues.) I added a .wav file to my /Content directory, and I can create and instantiate both SoundEffect and Song classes. However, both show durations of 00:00:00 (on a ten-second long file), and neither plays. I can call LoadContent without any issue. But when I call Play, nothing plays. I've tried a couple of different sounds, and different formats (MP3 and WAV) to rule that out. Only WAV seems to even load without crashing out, but it doesn't play. There seems to be a GitHub issue that fixes this problem in 2.5.1. Downgrading to 2.5.1 doesn't fix this problem; it seems like it's fixed in 3.0 (_data is set in the SoundEffect instance). This issue only occurs on Windows XP. I tested it on a Windows 7 laptop, and the sound plays fine.
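
    For reference, here is roughly the pipeline-free load path being described, as a minimal sketch (the file name is a placeholder, and SoundEffect.FromStream expects PCM WAV data). If _data stays empty on XP while the same code succeeds on Windows 7, that points at the platform audio backend rather than this calling code:

        using Microsoft.Xna.Framework;        // TitleContainer
        using Microsoft.Xna.Framework.Audio;  // SoundEffect

        public static class SoundCheck
        {
            // Load a PCM .wav from /Content at runtime, bypassing the
            // content pipeline entirely.
            public static bool PlayHit()
            {
                using (var stream = TitleContainer.OpenStream(@"Content\hit.wav"))
                {
                    SoundEffect effect = SoundEffect.FromStream(stream);

                    // Play() returns false if the platform could not start playback
                    return effect.Play();
                }
            }
        }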

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-09

    - by Bob Rhubart
    SOA Suite create partition in Enterprise Manager | Peter Paul van de Beek "In Oracle SOA Suite 10g, or more specific BPEL 10g, one could group functionality in domains," says Peter Paul van de Beek. "This feature has been away in the early versions of SOA Suite 11g. They have returned in more recent version and can be used for all SCA composites (instead of BPEL only). Nowadays these 10g domains are called partitions." OOW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting. My presentations at Oracle Open World 2012 | Guido Schmutz The list of #OOW participants sharing their presentations grows with this post from Oracle ACE Director Guido Schmutz. You'll find Slideshare links to his presentations "Oracle Fusion Middleware Live Application Development (UGF10464)" and "Effective Fault handling in SOA Suite 11g (CON4832)." HTML Manifest for Content Folios | Kyle Hatlestad Kyle Hatlestad, solutions architect with the Oracle Fusion Middleware A-Team, shares the details on "a project to create a custom content folio renderer in WebCenter Content." Adaptive ADF/WebCenter template for the iPad | Maiko Rocha Oracle Fusion Middleware A-Team member Maiko Rocha responds to a a customer request for information about how to create an adaptive iPad template for their WebCenter Portal application, "a specific template to streamline their workflow on the iPad." Thought for the Day "I loved logic, math, computer programming. I loved systems and logic approaches. And so I just figured architecture is this perfect combination." — Maya Lin Source: Brainy Quote

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regards to encrypting web configuration sections in .NET 4.0. This information is also applicable to .NET 3.5 and 2.0. Two methods work well for encrypting connection strings in a web project configuration file:

    1. Do It All Yourself!

    In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program explicitly retrieves the appropriate content from the configuration file, then decrypts it as needed. Disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .NET 4.0 Framework (The Way to Go!)

    Fortunately, all the tools needed for protecting configuration files are built into the .NET 2.0/3.5/4.0 versions, with very little setup required. To encrypt connection strings, one can use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config file connection strings is simply the following:

        aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

        Option   Description
        -pe      Section name to encrypt
        -pd      Section name to decrypt
        -app     Web application name
        -prov    Encryption/decryption provider

    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed.

    Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps!

    Some great references concerning the topic:

    http://msdn.microsoft.com/en-us/library/ff650037.aspx
    http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
    http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
    http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
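
    The same protection can also be applied from code through the System.Configuration API, which is handy for first-run setup. A minimal sketch (the "/SampleApplication" virtual path is carried over from the example above):

        using System.Configuration;
        using System.Web.Configuration;

        public static class ConfigProtector
        {
            // Encrypt the connectionStrings section of a web application's
            // web.config; equivalent to aspnet_regiis -pe.
            public static void ProtectConnectionStrings()
            {
                Configuration config =
                    WebConfigurationManager.OpenWebConfiguration("/SampleApplication");
                ConfigurationSection section = config.GetSection("connectionStrings");

                if (section != null && !section.SectionInformation.IsProtected)
                {
                    section.SectionInformation.ProtectSection(
                        "RsaProtectedConfigurationProvider");
                    config.Save(ConfigurationSaveMode.Modified);
                }
            }
        }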

    Read the article

  • Becoming a Certified Information Professional

    - by Lance Shaw
    Yesterday, we participated with AIIM in a webinar about the Certified Information Professional (CIP) program that they are now offering. The interest level in the program is very high, as evidenced by the high turnout at the event. You might be asking yourself: why does the Oracle WebCenter team care about an AIIM certification program? Well, we sponsored this program because we consistently find that the more educated our customers and prospects are, the more value they are going to get out of the technology we provide. As an ECM vendor, we provide plenty of WebCenter product training and certifications to help you make the most of WebCenter technology. While these are essential and valuable, technologists who also have an operational command of the business, and of the various impacts that the flow of information can have, are even more valuable to an organization. Thinking about the management of content and information and its effect on business process can have wide-ranging benefits, not only for your company but for your personal bottom line. And let's be honest: a customer who is looking holistically at how content is managed is going to see more opportunities to leverage that content, and in many cases this will motivate the purchase of additional product licenses. Now, if you are regretting the fact that you missed the webinar yesterday, never fear! It is now available for playback and you can view it at your convenience by visiting the AIIM website. We hope you find it informative and that you can personally profit from being able to showcase your certification as an Information Professional. Additionally, we hope it will help you identify additional opportunities to leverage Oracle WebCenter in order to further reduce your operational costs and drive your business forward.

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm in the planning stage of a new CMS. I previously used WordPress and other open-source CMSs for my clients, but I always had to write new modules and even mess with the code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up just because I forgot to include some feature in the architecture or did it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest some articles or books on the subject, especially on the architecture of such systems. I googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, module architecture. You make a page and add any modules you need there, but modules are not global; they are local to a page. So you can make two pages with the same module but different content if you set different "content IDs" for these two entities, or set the same ID so that both pages show the same module content (see the sketch below). I also plan to support online storage web services (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the DB for that, but I haven't decided yet. I also think I will support other databases with a global DB class and separate DB wrappers for MySQL and other databases. And, well, I would appreciate any other information you can provide on the subject.
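
    To make the page-local module idea concrete, here is a minimal sketch of how a module could resolve its content through a per-page content ID. The class, table, and column names are invented for illustration:

        <?php
        // A module instance is mounted on a page with a content ID chosen by
        // that page; two pages sharing an ID render identical content.
        class TextModule
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            public function render($contentId)
            {
                // hypothetical table: module_content(id, body)
                $stmt = $this->db->prepare('SELECT body FROM module_content WHERE id = ?');
                $stmt->execute(array($contentId));
                return htmlspecialchars($stmt->fetchColumn());
            }
        }

        // Page A and page B mount the same module class:
        // echo $module->render(7);  // page A, content ID 7
        // echo $module->render(7);  // page B, same ID -> same content
        // echo $module->render(9);  // page B with its own ID -> different content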

    Read the article

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180 GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites into their own site collections; however, this will likely still leave me with a 100 GB DB to deal with. While this isn't entirely problematic for SQL, it does mean that backups/restores take far too long. My idea is to split each of the databases into 30 GB files, then import the content into them, which should distribute the content across the filegroups, making it much easier and faster to back up and restore. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to find out other methods of managing the size of databases.
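
    A restore always recreates the file layout that was backed up (WITH MOVE only relocates files, it cannot re-split them), so the usual route is to add files to the live database and let SQL Server redistribute data into them. A sketch with invented file names and paths, assuming the primary data file's logical name is WSS_Content:

        -- add two more 30 GB files to the PRIMARY filegroup
        ALTER DATABASE WSS_Content ADD FILE
            (NAME = N'WSS_Content_1', FILENAME = N'D:\SQLData\WSS_Content_1.ndf', SIZE = 30GB);
        ALTER DATABASE WSS_Content ADD FILE
            (NAME = N'WSS_Content_2', FILENAME = N'D:\SQLData\WSS_Content_2.ndf', SIZE = 30GB);

        -- push existing pages out of the original data file; SQL Server spreads
        -- them across the other files in the same filegroup (the primary file
        -- itself can never be emptied completely)
        DBCC SHRINKFILE (N'WSS_Content', EMPTYFILE);

    Once the data is spread out, per-filegroup backups (BACKUP DATABASE ... FILEGROUP = ...) become possible, which is what actually shortens the maintenance window.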

    Read the article

  • JavaOne India Early Bird Discount Ends April 2nd

    - by Tori Wieldt
    JavaOne India
    3-4 May, 2012
    Hyderabad International Convention Centre

    Register Now and Save - For A Limited Time! If you register by 2 April, you'll save INR 1080 on this premier Java technology conference. JavaOne will return for the second straight year to India, May 3-4, at the Hyderabad International Convention Centre. This year's lineup will once again bring some of the leading experts from all over the world, as well as local Indian content. Sharat Chander (Director - Java Technology Outreach) said, "JavaOne is the premier Java technology conference in the world, for developers by developers. Every year we keep increasing community participation in both the content selection and content delivery, and this year we expect even more."

    The JavaOne India tracks are:

    Client-Side Technologies and Rich User Experiences
    Learn about developments in Java for the desktop and practices for building rich, immersive, and powerful user experiences across multiple hardware platforms and form factors.

    Core Java Platform
    Discover the latest innovations in Java virtual machines. Get deep technical explanations of security and networking, and of enhancements that allow dynamic programming languages to drive Java platform adoption.

    Java EE Web Profile, Platform Technologies, Web Services, and the Cloud
    Update your knowledge on topics such as Web application development, persistence, security, and transactions. This track will also address modularity, enterprise caching, Web sockets, and internet identity.

    Mobile, Java Card, Embedded, and Devices
    This track is devoted to Java technology as the ultimate platform for mobile computing. It also covers embedded and device usages of Java technologies, including Java SE, Java ME, Java Card, and JavaFX.

    Share this event: #javaoneIndia

    Read the article

  • My First Post with Windows Live Writer

    - by geekrutherford
    I receive daily newsletters from DotNetSlackers regarding various .NET topics. Today I read an article from an apparent Microsoft employee who gave some insight into the organizational culture within the company. Always on the lookout for new tools and technologies, I noted that he used Windows Live Writer for editing and managing his blog content. I thought I'd give it a try. Let's try adding a picture and adjusting its placement within this blog post relative to this text. … Adding the image is quite simple using the "Insert" options given to the right of the blog content editor. Inserting without using a table makes aligning text just so impossible, but that's inherent with any WYSIWYG editor. Instead, using a table with at least 2 columns (1 for text and 1 for the image) works best. Let's try adding a map! That's pretty sweet! You can map to any location within the editor itself. A dialog opens which utilizes Bing!, allowing you to enter the address, etc. Well, that's enough for me. Time to pimp this to my wife for use on our family blog. BTW, Windows Live Writer allows you to post content to a number of blogging sites…fantastic!!!

    Read the article

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details.

    Home page: [thebluewaffles].[com]
    Keywords: Blue Waffles; the rest of the keywords are post/subject specific.
    Site description: Health Articles Blog
    Site age: 1.5 years

    A short history: when I started my website, the few things on my mind when posting content were at least 500 words on each page, and writing all the articles with to-the-point information. I didn't go really fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social marketing websites and some article-sharing websites, after which I could see my website's rankings in the top 5 SERP results. I ranked well enough for about 8 months continuously, but then didn't keep updating content, so there were about 3 rough months when no content was posted due to some personal work. The SERPs dropped to the second page in April and almost started disappearing in May. I asked a lot of people about it, and most came up with the reason of "no updates to the site", so I started updating my site again. November has almost started, and I see no signs of my website's ranking. Another important point: when I post a new article and do a title search in Google, I see it ranks well enough for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Apache Prepending Header Information to ALL FILES

    - by Michael Robinson
    We're in the middle of setting up new servers, and have been having some odd problems with Apache. Apache is prepending text that looks like this to all files:

        $15plðI‚‚?E?ðA™@?@??yeÔ|~Ÿ²?PγZ" zS€?8i³?? ,ÀŠ{ÿB
        HTTP/1.1 200 OK
        Date: Mon, 02 Feb 2009 22:28:05 GMT
        Server: Apache/2.2.3 (CentOS)
        Last-Modified: Mon, 02 Feb 2009 22:28:05 GMT
        ETag: W/"1238007d-2224e-fe617f40"
        Accept-Ranges: bytes
        Content-Length: 139854
        Connection: close
        Content-Type: application/x-javascript

    The file I copied the above text from is the prototype library js file, as loaded from our server. I've searched, but couldn't find much about this problem. Maybe I don't know what I'm searching for... Anyway, if anyone has seen this behaviour before, could they please let me know either 1) how to fix it so that this content is not prepended to all files, or 2) where to look for further help. Thanks

    Read the article

  • Saving Dragged Dropped items position on postback in asp.net [closed]

    - by Deeptechtons
    OK, I saw many posts on how to serialize the values of dragged items to get a hash, and they tell how to save it. Now the question is: how do I persist the dragged items' positions the next time the user logs in, using the hash value that I got? E.g.:

        <ul class="list">
            <li id="id_1">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_2">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_3">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_4">
                <div class="item ui-corner-all ui-widget"></div>
            </li>
        </ul>

    which on serialize will give "id[]=1&id[]=2&id[]=3&id[]=4". Now think that I saved it to a SQL Server database in a single field called SortOrder. How do I get the items back into that order again? The code that makes the items sortable is below, without which people wouldn't know which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>
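
    One way to reapply a saved order on page load, sketched here with a hard-coded placeholder standing in for the SortOrder value the server would echo into the page: appending each <li> to the list in saved sequence moves it (appendTo moves, not copies) to the end, which rebuilds the whole order.

        <script type="text/javascript">
            $(document).ready(function() {
                // placeholder for the SortOrder field value from the database,
                // e.g. "id[]=2&id[]=4&id[]=1&id[]=3" -> ["2", "4", "1", "3"]
                var sortOrder = 'id[]=2&id[]=4&id[]=1&id[]=3';
                var ids = sortOrder.match(/\d+/g);

                // move each matched <li> to the end of the list, in saved order
                $.each(ids, function(i, id) {
                    $('#id_' + id).appendTo('.list');
                });

                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>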

    Read the article

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on a website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is written for Drupal 6; the client is written for Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then there will be a need for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.:

        Drupal 6 server = 1 repository
        Drupal 6 client = 1 repository
        Drupal 7 server = 1 repository
        Drupal 7 client = 1 repository
        etc.

    Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories: one for the client and another for the server.
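
    For what it's worth, the two-repository layout with a branch per core version mirrors how drupal.org itself versions contributed modules (6.x-1.x, 7.x-1.x branches). A sketch of that shape, with a placeholder repository URL:

        # one repository for the server module, one branch per Drupal core version
        git clone git@example.org:sync-server.git
        cd sync-server
        git checkout -b 6.x-1.x      # existing Drupal 6 code lives here
        git checkout -b 7.x-1.x      # the Drupal 7 port starts from it
        # later: git checkout -b 8.x-1.x for the Drupal 8 version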

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100 GB of data in total, but so far Varnish has downloaded 800 GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 \
            -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
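
    A first diagnostic step, sketched with Varnish 2.x-era tools, is to confirm whether requests are registering as hits or misses before touching the VCL (note, incidentally, that the backend sends a non-standard "E-Tag" header rather than "ETag"):

        # cumulative hit/miss counters since startup
        varnishstat -1 | egrep 'cache_hit|cache_miss'

        # watch the traffic varnish sends to the backend, grouped per request
        varnishlog -b -o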

    Read the article
