Search Results

Search found 41795 results on 1672 pages for 'hidden files'.

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Wireless connection by using command screen

    - by Amadeus
    I installed Ubuntu 12.04 armhf to my beagleboard-xm and now trying to connect to a wireless network. First, I checked if I can search for available networks: ubuntu@arm:~$ iwlist scan lo Interface doesn't support scanning. usb0 Interface doesn't support scanning. wlan0 Scan completed : Cell 01 - Address: EA:7D:EF:60:C9:0B Channel:1 Frequency:2.412 GHz (Channel 1) Quality=70/70 Signal level=-23 dBm Encryption key:on ESSID:"ghostrider" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Ad-Hoc Extra:tsf=000000005a1ab50e Extra: Last beacon: 6242ms ago IE: Unknown: 000A67686F73747269646572 IE: Unknown: 010882848B962430486C IE: Unknown: 030101 IE: Unknown: 06020000 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: Unknown: 32040C121860 IE: Unknown: 2D1A2C181BFF00000000000000000000000000000000000000000000 IE: Unknown: 3D16010800000000FF000000000000000000000000000000 IE: Unknown: DD09001018020000000000 Then I edited /etc/network/interfaces file to the following: root@arm:/etc/wpa_supplicant# cat /etc/network/interfaces auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet dhcp # Example to keep MAC address between reboots #hwaddress ether DE:AD:BE:EF:CA:FE # WiFi Example auto wlan0 iface wlan0 inet dhcp wpa-ssid "ghostrider" wpa-psk "b34d373eb2fb836a43b0afffe783c7d0af694724506c9e77b06d1021302905bf" But I cannot still connect to the wireless network: root@arm:/etc/wpa_supplicant# iwconfig lo no wireless extensions. usb0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Asociated Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on eth0 no wireless extensions. root@arm:/etc/network# ifup wlan0 Failed to bring up wlan0. What is wrong? Should I change any other files also? But I think it was enough. By the way if you are curious about where that wpa-psk came from: zero@ghostrider:~$ wpa_passphrase ghostrider 34bddf67c2 network={ ssid="ghostrider" #psk="34bddf67c2" psk=b34d373eb2fb836a43b0afffe783c7d0af694724506c9e77b06d1021302905bf } I will appreciate any effort to help. Regards, Amadeus ps: Also I tried to connect manually: root@arm:/etc/network# iwconfig wlan0 essid ghostrider key s:34bddf67c2 But this did not solve my problem also.
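
    Two details stand out in the question above (observations to check, not a confirmed fix): the scan reports the "ghostrider" cell as Mode:Ad-Hoc, while the interfaces stanza will bring wlan0 up in the default managed (infrastructure) mode, and the 64-character hex key produced by wpa_passphrase is normally written in /etc/network/interfaces without quotes, since a quoted value is treated as a plain-text passphrase. A minimal sketch of the stanza, reusing only values already shown in the question:

        # /etc/network/interfaces -- sketch; SSID and key taken from the question
        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid ghostrider
            # 64-character hex PSK from wpa_passphrase, no quotes
            wpa-psk b34d373eb2fb836a43b0afffe783c7d0af694724506c9e77b06d1021302905bf

    Running ifdown wlan0 followed by ifup -v wlan0 prints the wpa_supplicant steps verbosely, which usually shows whether the key format or the network mode is what fails.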

    Read the article

  • Brain Teaser: How Did I Do This (Part 1: The Solution)

    - by Geertjan
    In Part 1: The Challenge, published this time last week, I introduced a "brain teaser". The brain teaser asks you to figure out how to allow images and other files to be meaningfully dropped onto a NetBeans Platform application, i.e., on the drop something useful should happen with the dropped file: if the file is an image, the image should open in the IDE; if the file is a PDF document, the PDF viewer should open externally; if the file is a text file, it should open as a text in the IDE, etc. Solution. And here is the solution: http://bits.netbeans.org/dev/javadoc/org-openide-windows/org/openide/windows/ExternalDropHandler.html When an implementation of the "ExternalDropHandler" class is available in the global Lookup, and an object is being dragged over some part of the main window, the window system may call the methods of this class to decide whether it can accept or reject the drag operation. And when the object is actually dropped, this class will be asked to handle the drop. OK, so go ahead and implement the above class and put it into the Lookup. Or... guess what? The NetBeans Platform has a default implementation of the above class, appropriately named "DefaultExternalDropHandler". Not only is this useful to learn about how to implement the ExternalDropHandler class (i.e., by reading the source here): you can simply include the module that contains this class in your own NetBeans Platform application and then your application will be able to receive external drag/drop events and do something meaningful with them thanks to the DefaultExternalDropHandler. Do this: Open your NetBeans Platform application in NetBeans IDE. Right-click the application in the Projects window and choose Properties. In the Libraries tab, expand the "ide" cluster, and select "User Utilities". (That's where "DefaultExternalDropHandler.java" is found and registered in the Lookup.) Now click the "Resolve" button, if it appears, because some additional related modules need to now be included, if they haven't been included yet. Again in the "ide" cluster in the Libraries tab, select "Image". That's the Image Editor. Click OK. Run the application. Drag an image or some other type of file into your application, from outside the application, and you'll see the application tries to handle the drop. If the file being dragged is an image, it will open in the Image Editor, which you included in the previous step of these instructions. Hurray, you're done. Without any programming at all, you've added a cool new feature to your application.
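
    For readers who want to roll their own handler instead of reusing DefaultExternalDropHandler, a rough sketch is below. It follows the method set described in the linked Javadoc; the drop-handling body is only a placeholder, and the class is published into the global Lookup with @ServiceProvider so the window system can find it.

        import java.awt.datatransfer.DataFlavor;
        import java.awt.dnd.DropTargetDragEvent;
        import java.awt.dnd.DropTargetDropEvent;
        import java.util.List;
        import org.openide.util.lookup.ServiceProvider;
        import org.openide.windows.ExternalDropHandler;

        // Sketch only: the window system calls canDrop() while something is dragged
        // over the main window and handleDrop() when it is released.
        @ServiceProvider(service = ExternalDropHandler.class)
        public class MyDropHandler extends ExternalDropHandler {

            @Override
            public boolean canDrop(DropTargetDragEvent e) {
                return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
            }

            @Override
            public boolean canDrop(DropTargetDropEvent e) {
                return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
            }

            @Override
            public boolean handleDrop(DropTargetDropEvent e) {
                try {
                    List<?> files = (List<?>) e.getTransferable()
                            .getTransferData(DataFlavor.javaFileListFlavor);
                    for (Object file : files) {
                        // open the dropped java.io.File in an editor, viewer, etc.
                    }
                    return true;
                } catch (Exception ex) {
                    return false; // tell the window system the drop was not handled
                }
            }
        }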

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning. One of the nice tricks with the tracing is that it honors the wildcard (*) character.  So you can put in 'schema*' and gather all of the schema-related tracing.  And you can notice that if you select 'all' and update, it changes to just a *.  To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity.  But if you've got a lot of tracing going on, it would be better to go after the trace log file itself.  By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'.  You may also find previous trace logs that have rolled over.  In this case they will be identified by a date/time stamp in the name.  By default, the server will rotate the logs after they reach 1MB in size.  And it will keep the most recent 10 logs before they roll off and get deleted.  If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings. If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    These are set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section.  Restart the server and it should pick up the new logging settings.

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: It has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced by a complex hybrid system where backups have to be taken from Servers, and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • Unable to boot into 11.10 in normal (get a black screen) OR recovery (get just a flashing cursor, no cli)

    - by user1092284
    Okay, so I had a dual boot of Tango Studio (based on Ubuntu 10.04) and Windows XP. Yesterday I downloaded the .iso for Ubuntu 11.10 and attempted to install from a USB (my BIOS won't normally boot from USB but I had PLOP boot manager on a CD). I booted up Ubuntu from the USB and then from there formatted the partition with Tango on it and installed Ubuntu 11.10. On booting up I came into Grub rescue mode. So I booted up from the USB again and used boot-repair to reinstall Grub. After this I would see the normal Grub menu, but on choosing Ubuntu I would come to a black screen. On choosing recovery mode it would begin starting normally with no obvious errors, but instead of coming to a CLI I would just get a blank screen with a flashing cursor on the top left, not accepting any input. I have since reformatted and reinstalled from a CD rather than USB and had the exact same problem. I used boot-repair again and the result is the same. Output of the most recent boot-repair is at http://paste.ubuntu.com/869805/ I have also tried editing the Ubuntu grub entry and replacing quiet with text nomodeset as I saw in an answer to another question. This got me a bit further - I saw the purple Ubuntu loading screen but still came to a blank screen after that. Anyway, in most of the other questions in which that is brought up the user is still able to boot into recovery, while I am not. Can anyone help? Thanks in advance, let me know if there's any more info I need to provide! EDIT FOR MORE INFO: I read something saying it's quiet splash that should be replaced with nomodeset. Earlier I had left splash in the line. So I tried it this way and it froze after displaying the following text: fsck from util-linux 2.19.1 mountall: Plymouth command failed mountall: Disconnected from Plymouth /dev/sda5: clean, 139359/1741488 files, 745830/6961125 blocks From a bit of googling it doesn't look like Plymouth is essential, but I've checked and I do have the most current versions of mountall and plymouth installed, so I don't know why there's a problem. EDIT FOR MORE INFO AGAIN: I used dpkg-reconfigure plymouth because I saw it mentioned in another forum, and it still says Plymouth command failed on boot

    Read the article

  • Accessing controls of .aspx file in .aspx.cs without any declaration.!!??

    I am able to access the controls of the ".aspx" file in ".aspx.cs" directly without any declaration in ".aspx.cs" or in designer.cs. How is this possible? This is happening only if I open the website using the File System option. Create a new ASP.NET web site application with Visual Studio 2008. The following three files will be created automatically: "Default.aspx", "Default.aspx.cs", "Default.designer.cs". Now delete "Default.designer.cs" permanently. Just create a button in the Default.aspx file: <asp:Button runat="server" Text="Save Plan" ID="btnSave" /> Close the solution and open the website as File System: File -> Open Web Site -> File System -> select the web site folder and open the project. Now btnSave is automatically recognized in Default.aspx.cs without any declaration in Default.aspx.cs, as below: System.Web.UI.WebControls.Button btnSave; How is btnSave being recognized by the .cs file without defining it anywhere as an object of System.Web.UI.WebControls.Button? Note: This happens only if you open the Web Site from the File System, and there is no declaration at all for btnSave. Please refer to this article on this.
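
    As background (an illustration, not part of the original question): a web site opened from the file system is compiled dynamically by ASP.NET, which generates the control field declarations straight from the markup, so no designer.cs is needed; a Web Application project relies on the generated designer file instead. The difference shows up in the @ Page directive -- the Inherits names here are just examples:

        <%-- Web Site project: compiled at run time, control fields generated from the markup --%>
        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

        <%-- Web Application project: precompiled, needs Default.designer.cs and CodeBehind --%>
        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApp._Default" %>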

    Read the article

  • Code testing practice

    - by Robin Castlin
    So now I have come to the conclusion, like many others, that having some way of constantly testing your code is good practice, since it enables fewer people to be involved (colleagues and customers alike) by simply knowing what's wrong before someone else finds out the hard way. I've heard and read some about Unit Testing and understand what it's supposed to do and all. Then there are so many different types of bugs. It can be everything from the web browser not being able to send correct values, JavaScript failing, a global function messing up a piece of code somewhere, to a change that looked good when testing it out but fails in some special case which was hard to anticipate. By simply finding these errors I learn to rarely repeat them again, but there seem to always be new bugs to be found and learnt from. I would guess maybe the best practice would be to run every page and its functions a couple of times, witness the result and repeat this in Firefox, Chrome and Internet Explorer (and all smartphones apparently) to make sure it works as intended. However, this would take quite some time to do, considering I don't work with patches/versions and do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended. Basically just run a lot of cURLs with POST values and see if I get the expected result. But how would I preferably not increase the IDs of my MySQL rows if I delete these testing rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some kind of smooth way to return a "TRUE" on testing instead of the actual page. But for the time being this solution would have to be applied to existing projects. My question: What would you recommend as the best way to test my site to make sure that existing functions do their job when editing the code? Should I implement a lot of edits first, then manually test the entire code to make sure it still works? Is there any nice way of testing code without "hurting" the ID columns? Extra thoughts: Would it be a good idea to associate all of my files with the different parts of my site which they affect? For instance, if I edit home.php I will, through documentation, test whether my homepage's start works as intended since it's the only part of my site it should affect.
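
    One low-tech way to get the "page I can just load" described above is a plain PHP script that posts to each page with cURL and checks the response for an expected marker. Pointing it at a copy of the database (or a separate test vhost) also sidesteps the auto-increment worry, since deleted or rolled-back test rows still consume IDs on live tables. The URLs, fields and expected strings below are invented placeholders:

        <?php
        // smoke-test.php -- a rough sketch; adjust the table of tests to the real site
        $tests = array(
            array('url'    => 'http://test.example.local/login.php',
                  'post'   => array('user' => 'testuser', 'pass' => 'secret'),
                  'expect' => 'TRUE'),
            array('url'    => 'http://test.example.local/home.php',
                  'post'   => array(),
                  'expect' => 'Welcome'),
        );

        foreach ($tests as $t) {
            $ch = curl_init($t['url']);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture output instead of printing it
            if (!empty($t['post'])) {
                curl_setopt($ch, CURLOPT_POST, true);
                curl_setopt($ch, CURLOPT_POSTFIELDS, $t['post']);
            }
            $body = curl_exec($ch);
            curl_close($ch);
            echo (strpos((string) $body, $t['expect']) !== false ? 'PASS ' : 'FAIL ') . $t['url'] . PHP_EOL;
        }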

    Read the article

  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would constitute (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already, as can be seen, e.g., below you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution). Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes: What needs to be done, among other things: A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE. Integration of everything available on the Play command line. Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above. More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly. Options window extension to register the Play executable, instead of the current hardcoded solution. Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, i.e., need to embed Scala support into HTML. Hyperlinking in the "routes" file, as well as special support for the "application.conf" file. Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay Kind of cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem since I think that i didn't make it clear enough: The Y-coordinates on the bottom of my screen should be 0. Instead it is the height of my screen. That means the "higher" i touch/click the screen the less my y-coordinate gets. Above that the origin is not inside my screen, atleast not the 0 y-coordinate. Original post: I'm currently developing a tower defence game for fun by using LibGDX. There are places on my map where the player is or is not allowed to put towers on. So I created different ArrayLists holding rectangles representing a tile on my map. (towerPositions) for(int i = 0; i < map.getLayers().getCount(); i++) { curLay = (TiledMapTileLayer) map.getLayers().get(i); //For all Cells of current Layer for(int k = 0; k < curLay.getWidth(); k++) { for(int j = 0; j < curLay.getHeight(); j++) { curCell = curLay.getCell(k, j); //If there is a actual cell if(curCell != null) { tileWidth = curLay.getTileWidth(); tileHeight = curLay.getTileHeight(); xTileKoord = tileWidth*k; yTileKoord = tileHeight*j; switch(curLay.getName()) { //If layer named "TowersAllowed" picked case "TowersAllowed": towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight)); // ... AND SO ON If the player clicks on a "allowed" field later on he has the opportunity to build a tower of his coice via a menu. Now here is the problem: The towers render, but they render at wrong position. (They appear really random on the map, no certain pattern for me) for(Rectangle curRect : towerPositions) { if(curRect.contains(xCoord, yCoord)) { //Using a certain tower in this example (left the menu out if(gameControl.createTower("towerXY")) { //RenderObject is just a class holding the Texture and x/y coordinates renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y)); } } } Later on i render it: game.batch.begin(); for(int i = 0; i < renderList.size() ; i++) { game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y); } game.batch.end(); regards
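
    The symptom in the update (y equals the screen height at the bottom and 0 at the top) is the usual screen-versus-world mismatch in libGDX: input coordinates have their origin at the top-left with y growing downward, while the camera and SpriteBatch work in world coordinates with the origin at the bottom-left. A hedged sketch, assuming an OrthographicCamera named camera whose combined matrix is set on the batch, and the towerPositions list from the question:

        // Convert the raw click position into world coordinates before hit-testing.
        Vector3 touchPos = new Vector3();

        if (Gdx.input.justTouched()) {
            touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(touchPos); // now in the same space as towerPositions
            for (Rectangle curRect : towerPositions) {
                if (curRect.contains(touchPos.x, touchPos.y)) {
                    // build the tower at curRect.x / curRect.y as before
                }
            }
        }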

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family’s Mac Mini.  It’s the early 2009 model and I hadn’t done it since we got it in March of 2009.  Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac.  AND that install goes back to before 2005, as far as I can remember.  SO, to say that “cruft” had built up in the operating system, is probably a bit of an understatement. The rebuild went pretty smoothly, especially since I had a couple of spare hard drives.  I hooked up a spare USB drive and formatted it for use with the Mac.  I then used Carbon Copy to clone the internal hard drive onto the USB drive.  (Carbon Copy is a great little app that I used several years ago and I was happy to see it was not only still around, but updated as well.) Once I had my backup, I shut down the Mac and replaced the internal hard drive.  I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn’t using it anymore.  The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original’s capacity and is also faster.  I’ll have to keep an eye on the temperature, since that 7200 drive will run hotter. Opening the Mac Mini is not for the easily intimidated!  That cool little case is quite the pain to open.  Luckily, OWC put a video together here.  After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac.  After the OS, it was time to reinstall the apps.  I downloaded some of the freeware, just to make sure I had the latest versions.  For the rest, I just copied from the backup cloned drive to the new drive.  (I love the way most Mac apps are written – with almost everything contained within a “package” that I can just copy from one drive to another.  MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!) The whole process took longer than I would have preferred, but it was long overdue.  It definitely “feels” faster, especially boot time and application launches.

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But that doesn't match how I work at all. For example, I recently needed to add some code that checked if the version of a plugin to my system matched the versions the system supports. If not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages so I edited that code. That code was in the startup module of one entry to the system. The plugin checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin checking code works I need to go edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc. etc. Being lazy I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you or that make this process easier? I don't think I can somehow magically always modify stuff in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can git add --interactive etc., but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit is actually going to work. Also, since the changes are sitting in the current directory it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and then run the tests, if I missed a few lines or accidentally added a few too many lines I have no idea how I'd easily recover from that (as in either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit). Thoughts? Suggestions? PS: I hope this is an appropriate question. The help says development methodologies and processes
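
    One workflow that covers both concerns raised above -- splitting a finished working directory into pieces, and testing each piece before committing -- is interactive hunk staging combined with --keep-index stashing. A sketch (git stash push needs a reasonably recent git; older versions spell it git stash save):

        # Stage one logical change at a time, hunk by hunk
        git add -p                    # y/n per hunk, 's' to split, 'e' to edit a hunk

        # Check that only what is staged builds and passes tests
        git stash push --keep-index   # stash the unstaged remainder, keep the index
        # ... run the build/tests against the staged state ...
        git stash pop                 # bring the rest of the changes back

        git commit                    # commit the verified chunk, then repeat

    If the tests show a line is missing from the staged chunk, popping the stash brings it back into the working tree, where it can be staged with another git add -p pass and folded in with git commit --amend.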

    Read the article

  • UPK 11.1 Content Development Training Offering from Oracle University

    - by user581320
    Oracle University has announced the availability of UPK 11.1 Content Development training courses.  These classroom and virtual classroom courses are available starting later this month.  This course is designed for course authors, editors, and other individuals in need of recording and editing content in the User Productivity Kit Developer. Through hands-on exercises, participants will learn how to build an outline, prepare for and record content in the target application, and use the Topic Editor to customize recorded content. Students will learn how to link web pages and external files to their content as well as create and link questions and assessments. Upon completion, participants will preview their content in the available playback modes before publishing. They will also explore the various deployment options for publishing, including the options for printed documents, and learn how to customize templates for player and print output. For information about the UPK 11.1 Content Development Training course, including offering dates and locations, visit the Oracle University web page for the course.

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It’s a Microsoft SQL Azure Lab and its codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, then publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an “SSIS on Azure” aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made to his post, I had confirmation of my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.” The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup; the original post includes a screenshot of the tool. The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Problem with DualBooting Ubuntu 13.10 and Win7

    - by VinArrow
    this is my first post here on AskUbuntu, not first time using it though. I wanted to install Ubuntu 13.10 on my PC to have all my work stuff there and leave Win7 for gaming. So i did my research on how to Dual Boot when you already have Win7 installed, here are the steps i took Used Disk Management on Win7 and shrunk that partition, leaving 80GB free for Ubuntu. Made a Bootable pendrive following the instructions on Ubuntu`s website. During the installation steps there was supposed to be a Install alongside Win7, but there wasnt, so i chose Something else. Everything was fine and i was able to install Ubuntu no problem on my unallocated 80GB partition (76GB Ubuntu + 4GB swap) There was a prompt for me to restart my PC and so I did expecting to see the dual boot screen (grub right?) Now, when i restarted my PC, Grub never showed up and it booted straight to Windows. Then I did some more research and found out that that could happen. Tried three things then Plugging in bootable pendrive again and selected Try Ubuntu without installing. Then i followed some instructions found here (How can I repair grub? (How to get Ubuntu back after installing Windows?)) and i could chroot into my Ubuntu install just fine. Repaired grub as instructed on that link, restarted the PC and booted straight into Win7 again. Again, used the bootable pen drive to Try Ubuntu... and used the Boot-repair tool (recommended repair). Again, booted straight into Win7. Lastly, i installed easyBCD on my Win7 and made a new entry for Ubuntu (Linux/BSD). When i rebooted the PC, there was the option to choose between Win7 and Linux, chose linux and it didnt work, taking me straight to a command line-like enviroment that read Minimum bash like scripting or something, as if I didn`t have a Linux OS installed. So, I thought I`d try and repair my Ubuntu install. And during the Installation method step there was the choice to install alongside Ubuntu 13.10! and that right there drove me crazy. Here is a screenshot of gparted showing how things are set up now http://imageshack.us/f/801/77u3.png/ Notice on the left-hand side how i can access my installation files just fine. sdb1- win7 reserved space, sdb2- win7 OS, sdb3- 76GB ubuntu install, sdb5- 4GB swap area. Does anyone know why my Ubuntu 13.10 is not being recognized? and what should I do to get it working? Thanks and sorry for the long read and bad english! (BIOS = legacy)

    Read the article

  • High traffic chat - how to check if there is new message and show it for all users

    - by user2633999
    I already had a question about this but obviously it was not received very well - apparently too long, when it was actually more information so you could have given me a better answer. Ok, I will be much clearer now. What is the best possible logic to develop a scalable chat in terms of stability, storing/reading messages, updating the chat with new messages for all users, etc.? I have most of this developed; the logic I think I'm missing is -- check if there is a new message and show it to all users. I have this implemented but it crashes the site due to its traffic of 300k-400k people, so that's my main question. The chat is PHP based and uses Pusher (www.pusher.com) for instant messaging but it lacks what I need because it's more like a websocket. I'm using hardcoded files to keep messages (I want to avoid a database as much as possible). It's a no-extension type of file, I'm sure you know. I'm getting a crash with $fp = fopen(..., "w"); // pretend ... is the path and filename fwrite($fp, $msg); //hardcode the message fclose($fp); where $msg is the message itself. I'm keeping 1 file per message. I show the last 150 messages = 150 file accesses and reads; yeah, it's too much I guess. I have better logic now which I'm pursuing, and that is 1 file with the last 50-100 messages at all times. Sure, it should be much better. How does it crash? That's the trickiest part, because everything seems ordinary; believe me, it is difficult to determine what exactly crashes the site, but in about 5 minutes when I try to open the site it's gone; then I put the old content without the chat back and it is online again. I have jQuery post every second to check if there is a new message. I'm using a timestamp in a special file where I keep the time the last message was sent, and if ((time() - time in file) <= 2) I reload the last 150 messages including the last one. Too much input/output (write/read, or however you say it) is, I think, what crashes the site.
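
    For the "1 file with the last 50-100 messages" plan mentioned above, a rough sketch of the file handling is below: append with an exclusive lock instead of rewriting with "w", and read only the tail of the file when polling. This only addresses the file I/O; with 300k-400k visitors the once-per-second polling itself (or leaning harder on the Pusher channel already in place) is likely to matter at least as much, and the file still needs an occasional trim back to its last N lines.

        <?php
        // Sketch: one append-only file per chat room instead of one file per message.
        function append_message($file, $msg)
        {
            $fp = fopen($file, 'a');           // append mode: existing content is never rewritten
            if ($fp && flock($fp, LOCK_EX)) {  // exclusive lock so concurrent posts don't interleave
                fwrite($fp, $msg . "\n");
                fflush($fp);
                flock($fp, LOCK_UN);
            }
            if ($fp) {
                fclose($fp);
            }
        }

        function last_messages($file, $count = 100)
        {
            $lines = @file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
            return $lines ? array_slice($lines, -$count) : array();
        }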

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010. How can I get my Silverlight code organized? My requirements for a snippet manager were: Needs to be FREE. An easy way to view XAML/C# code behind together in one “view”. I wanted the ability to store the code snippets in the cloud in case my HDD dies. Searchable keywords to quickly find code snippets. I started looking for a snippet manager that would allow me to do just that and finally found Snippet Manager. Before going any further, I think that one of the most important things to note here is that this software supports 37 languages. It’s not just for Silverlight developers or C#-only guys. The software supports Java, SQL and even COBOL.  Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections. The top part is my XAML and the bottom is my C# code behind. I’ve included a sample below of my code snippets so that you can get an idea of how I organized it. Another thing that’s great about this software is that it supports plain text. I added some connection strings in the TEXT section below.  Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called “snippets” on my FTP Server and hit the upload button once I am finished adding my new code snippets. This will allow me to use the code snippets on another computer with this application on my USB key. See screenshots below: Enter your FTP credentials below: Hit the Uploads button on the Toolbar: Log in to your FTP Server and verify the following files are now on the FTP Server: Another great feature of the Snippet Manager is that you can also integrate this into VS2010 by clicking Tools –> External Tools: And setting up your External Screen to point to the executable: You can now launch it by going to Tools –> Snippet Manager. If you want, you could also add a shortcut to launch the program with HotKeys. As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn’t go over every feature but this is something that you might want to download and give a shot.

    Read the article

  • Deleted Myself from Admin Group - Now Getting Error usermod: cannot lock /etc/passwd; try again later

    - by BubbaJ
    I have a laptop with Ubuntu 11.10 that is shared between myself and two other family members. My user id was setup as the only "Administrator" on the laptop. The other users were setup as "Standard" users. In my attempt to try to add myself to the user groups for the other users, I somehow deleted myself from the admin groups. I used the "usermod" command from the terminal. I must have neglected to include the proper switches or syntax for the update. It looks like I successfully added my userid to the group associated with my wife's account. When I use the "groups" command, I can see only my id and my wife's id in the list. I no longer see the "admin" or "adm" groups, and others that used to be listed. When I go into System Settings User Accounts it looks like my ID is now listed as a "Standard" user. I would like to change my account back to "Administrator", but now I can't. I did some searches for solutions and found that I would need to boot into Recovery Mode and execute the usermod command from the root session. I was able to successfully boot into Recovery Mode and get to the root session. I was trying to execute the command "usermod -a -G admin user1" to add my id (user1) back to the admin group. When I execute the command from the root session, I get the error message "usermod: cannot lock /etc/passwd; try again later". I tried preceding the usermod command with "sudo", but it didn't make a difference, same error. I then tried adding a new user using adduser, thinking I would try to create a new userid and make the new userid part of the admin group. I get the same error using the adduser command. I saw some posts that recommend looking for and deleting files that end in ".lock" in the etc directory. The only file I found was .pwd.lock which I haven't touched. I am at a loss as to what to try next. I am relatively inexperienced with Ubuntu and Linux, so alot of this is new to me. Any help you can provide would be much appreciated.

    Read the article

  • Need Help with partitions and such Dual Booting 13.10 w/ windows 7

    - by Aymax
    so I've spent the past couple days freaking out about this and looking for answers, and I decided to resort to asking on forums. probably should have before I wasted 2 entire days. so I am trying to DUAL-INSTALL Ubuntu 13.10 with my x64 Windows 7 home premium computer. I have 6 gb of ram, 1TB hard drive, and a 3.3GHZ dual core processer (just in case it matters). I've managed to figure some things out. I've burned the ubuntu files onto a DVD, and I have been able to successfully run it off the disk. I also shrunk my Windows partition by 120Gb and partitioned that for Ubuntu (all using the windows Disk Manager). Problems: When I turn my computer on with the DVD in the tray, the computer cant find windows. it flashes a screen real quick that says something about not being able to find an operating system, and then goes to "grub" and asks what i want to do with Ubuntu. this scares me, because I don't know if that means that I will not be able to boot windows if I install Ubuntu. The Ubuntu 13.10 installer does not detect my Windows operating system. I only have the options to Erase everything on my drive, or "something else." I choose that, which brings me to 3 I don't understand the partition table. I have no idea which drive im selecting to install stuff on, much less which one to select. I tried to tell by the amount of memory partitioned off, but none of the numbers seem to be accurate. Plus, all the names are dev/sda(#). I know Ubuntu knows the name of my partition, because on the sidebars it shows the names of the different drives, including the partition I made; so why don't they use the names? I have no idea what I'm going to be erasing. I've read that I should know which is which by the file system type, but they are all NTFS, including the one I made. my only other option was FAT, none for EXT2 or any of that like people said to do. My main concerns are that of accidentally erasing windows or not being able to access windows. any feasible solution is helpful, weather it helps me with the install or to make Ubuntu see windows. I realize this question has been asked much, but i have found no feasible answers so far. I am relatively new to this, and have never installed an operating system before, so I do not know most of the jargon. please keep it relatively simple, please. I am not a programmer. Thanks.

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are: We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems) There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow? My questions are: Is it normal for a group of this size not to have source control? I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above? Are there any more reasons for source control that I could add to my arsenal? I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?” Well of course no one is really looking for “X”!! But typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM that is related to a couple of key words they have in their mind. Here are three quick tips I give people: 1. Open up one of the OUM Views, then click “Expand All”, and then use your Browser’s search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, i.e. Architecture. This is a fast and easy option to use, but it only searches the current OUM page. 2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your Browser’s settings, the PDF file will either open up in a new Window, or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages, however the search will not include linked Documents, i.e. Word, Excel. 3. Use your operating system's file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is to make sure your local OUM folder structure is included in the Windows Index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the Index, i.e. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, i.e. Unit Testing. The reason I use this option the most is because the search will take place across the entire content of the indexed folders, including linked files. Happy searching!

    Read the article

  • ADF Real World Developers Guide Book Review

    - by Grant Ronald
    I'm halfway through my review of "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman - unfortunately some work deadlines derailed me from having completed my review by now, but here goes.  First thing, Jobinesh works in the Oracle Product Management team with me, so he is a colleague. That declaration aside, it's clear that this is someone who has done the "real world" side of ADF development, and that comes out in the book. In this book he addresses both newbies and experienced developers alike.  He introduces the ADF building blocks like entity objects and view objects, but also goes into some of the nitty-gritty details as well.  There is a pro and con to this approach; having only just learned about an entity or view object, you might then be blown away by some of the lower-level details of coding or lifecycle.  In that respect, you might consider this a book which you could read 3 or 4 times; maybe skipping some elements in the first read, but on the next read you have a better grounding to learn the more advanced topics. One of the key issues he addresses is breaking down what happens behind the scenes.  At first, this may not seem important since you trust the framework to do everything for you - but having an understanding of what goes on is essential as you move through development.  For example, on page 58 he explains the full lifecycle of what happens when you execute a query.  I think this is a great feature of his book. You see this elsewhere, for example he explains the full lifecycle of what goes on when a page is accessed: which files are involved, the JSF lifecycle, etc. He also sprinkles the book with best practices and advice that go beyond the standard features of ADF and really hit the mark in terms of "real world" advice. So in summary, this is a great ADF book, well written and covering a mass of information.  If you are brand new to ADF it's still valid given it does start with the basics.  But you might want to read the book 2 or 3 times, skipping the advanced stuff on the first read.  For those who have the basics already, it's going to be an awesome way to cement your knowledge and take it to the next level.  And for the ADF experts, you are still going to pick up some great ADF nuggets.  Advice: every ADF developer should have one!

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst-case (XML data contract serialization) and best-case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing. Performance also suffers when the memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queueing an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need a higher throughput[1]. The concept worked very well in theory but performance was not up to the task. The usage pattern has one service acting as a router, which does all reads. The other services will attach information based on their 3rd party hook, and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for awhile until the 3rd parties respond appropriately. The services right now account for this and have appropriate sleeping behaviors, as we utilize the priority field of the message for this reason. So, my question, is what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question. [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While performance of "1000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.
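
    On the serialization half of the question: one middle ground between text XML data contracts and BinaryFormatter is to keep the existing [DataContract] types but write them through the binary XML writer, which usually shrinks the payload noticeably without changing the contracts. A sketch -- Message is a placeholder for the real contract type:

        using System.IO;
        using System.Runtime.Serialization;
        using System.Xml;

        public static class MessageSerializer
        {
            private static readonly DataContractSerializer serializer =
                new DataContractSerializer(typeof(Message));

            public static byte[] Serialize(Message msg)
            {
                using (var ms = new MemoryStream())
                using (var writer = XmlDictionaryWriter.CreateBinaryWriter(ms))
                {
                    serializer.WriteObject(writer, msg);
                    writer.Flush();
                    return ms.ToArray(); // typically much smaller than the text XML form
                }
            }

            public static Message Deserialize(byte[] data)
            {
                using (var ms = new MemoryStream(data))
                using (var reader = XmlDictionaryReader.CreateBinaryReader(ms, XmlDictionaryReaderQuotas.Max))
                {
                    return (Message)serializer.ReadObject(reader);
                }
            }
        }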

    Read the article

  • What is the right way to group this project into classes?

    - by sigil
    I originally asked this on SO, where it was closed and recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process: Get the user's FTP credentials (username & password). Check to make sure the credentials establish a valid connection to the FTP server. Query several Sharepoint lists and join the results of those queries to create a list of items that need to have action taken on them. Each item in the list has a folder. For each item: Zip the contents of the folder. Upload the folder to the FTP server using SFTP Update the item's Sharepoint data. Email the user an Excel report showing, e.g., Items without folder paths Items that failed to zip or upload Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates. I've defined the following set of classes, each of which is in its own .cs file: SFTP: file transfer processes DataHandler: Sharepoint data retrieval/querying/updating processes. Also makes and uploads the zip files. Exceptions: Not just one class, this is the .cs file where I have all of my exception classes. Report: Builds and sends the report. Program: The main class for running the program. I recognize that the DataHandler class is a god object, but I don't have a good idea of how to refactor it. I feel like it should be more fine-grained than just breaking it into Sharepoint, Zip, and Upload, but maybe that's it. Also, I haven't yet worked out how to combine the periodic behavior with the "wait for user input at any point in the process" part; I think that involves threads, which means other classes to manage the threads... I'm not that well-versed in design patterns, but is there one that fits this project well? If this is too big of a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here.

    Read the article
