Search Results

Search found 3053 results on 123 pages for 'jeff james'.


  • Eclipse Java Profiler

    - by Jeff Storey
    I know this has been asked before, but I have not found anything recent that really gives a good answer. I'm trying to find a free profiler for Eclipse that works well; in particular, I would like a graphical breakdown of execution time. I've tried TPTP but have had no luck at all with GUI apps (it took almost a minute for a GUI app to start, and it was virtually unusable on screen; the app uses a lot of Java OpenGL, so I'm not sure if that's related). I liked YourKit, but unfortunately it's not free. I even tried switching to NetBeans, since it has a built-in profiler. If anyone has had success with particular profilers (even if it was TPTP), I'd like to hear about it. Any recommendations would be greatly appreciated. Thanks, Jeff

    Read the article

  • Groovy: call a private method in a Java superclass

    - by Jeff Storey
    I have an abstract Java class MyAbstractClass with a private method, and a concrete implementation MyConcreteClass:

        public abstract class MyAbstractClass {
            private void somePrivateMethod() { /* ... */ }
        }

        public class MyConcreteClass extends MyAbstractClass {
            // implementation details
        }

    In my Groovy test class I have:

        class MyAbstractClassTest {
            void myTestMethod() {
                MyAbstractClass mac = new MyConcreteClass()
                mac.somePrivateMethod()
            }
        }

    I get an error that there is no such method signature for somePrivateMethod. I know Groovy can call private methods, but I'm guessing the problem is that the private method is declared in the superclass, not in MyConcreteClass. Is there a way to invoke a private method in the superclass like this (other than using something like PrivateAccessor)? Thanks, Jeff
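
    A plain-Java reflection workaround is sketched below (a minimal sketch, assuming the method takes no arguments); this is essentially what helpers like PrivateAccessor do under the hood:

        import java.lang.reflect.Method;

        class ReflectiveCall {
            static void callSomePrivateMethod(MyAbstractClass mac) throws Exception {
                // Look the method up on the class that declares it (the superclass);
                // getDeclaredMethod on the concrete class won't find it.
                Method m = MyAbstractClass.class.getDeclaredMethod("somePrivateMethod");
                m.setAccessible(true); // lift the private-access check
                m.invoke(mac);
            }
        }

    Groovy's leniency with private methods appears to apply only to methods found on the runtime class itself, which would explain why the superclass declaration needs this explicit lookup.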

    Read the article

  • Application Server or Lightweight Container?

    - by Jeff Storey
    Let me preface this by saying this is not an actual situation of mine; I'm asking this question more for my own knowledge and to get other people's input. I've used both Spring and EJB3/JBoss, and for the smaller types of applications I've built, Spring (+Tomcat when needed) has been much simpler to use. However, when scaling up to larger applications that require things like load balancing and clustering, is Spring still a viable solution? Or is it time to turn to a solution like EJB3/JBoss when you get big enough to need those features? I'm not sure if I've scoped the problem well enough to get a good answer, so please let me know. Thanks, Jeff

    Read the article

  • NHibernate, DynamicProxy, and Spring AOP

    - by jeff
    We have a Spring IoC-managed application that uses NHibernate in its persistence layer. We use Spring AOP, understand its terminology and capabilities, and have some investment in Spring proxies. Now we want to add a PropertyChangedMixin and a ValidatorInterceptor (not NHibernate Validator, but based on Spring validation) onto our NHibernate-managed objects. I've looked at the hooks for NHibernate's IInterceptor and event listeners, and they give me a place to apply the desired proxies. If I use the Spring proxies, will they play nicely with the existing NHibernate proxies? We don't lazy load. From the simple NHibernate work we've done, the benefits of DynamicProxy look appealing. I can go either way, but I'd like to hear suggestions. Thanks, jeff

    Read the article

  • Perl regex: output only characters that can be used in a Unix filename

    - by Jeff Balinsky
    I wrote a basic MP3-organizing script for myself. I know the power of regexes, but I suck at the syntax. I have the line:

        $outname = "/home/jebsky/safehouse/music/mp3/" . $inital . "/" . $artist . "/" . $year . " - " . $album . "/" . $track . " - " . $artist . " - " . $title . ".mp3";

    All I want is a regex that changes $outname so that any characters unsafe for filenames get replaced by an underscore. Thanks, Jeff

    Read the article

  • How to customize OOTB workflow emails

    - by Jeff
    How can I make simple format-type customizations to ALL OOTB workflow-related emails? Many pre-SharePoint 2010 posts indicate that OOTB workflow emails are in fact 'alerts', and that OOTB workflow emails could therefore be customized using the same technique: making a customized version of alerttemplates.xml, and even using IAlertNotifyHandler to intercept all alert emails. However, it seems that OOTB workflow and workflow-task emails are not affected by changes to my customalerttemplates.xml file (which I do follow with stsadm updatealerttemplates, an iisreset, and a timer-service restart). This is what I used as a guide to customize alerts: http://blogs.msdn.com/b/sharepointdeveloperdocs/archive/2007/12/14/how-to-customizing-alert-emails-using-ialertnotificationhandler.aspx What am I missing? Is there a separate template for workflow emails? Can OOTB workflow emails be customized at all? Thanks! Jeff

    Read the article

  • Getting a MySQL row that doesn't conflict with another row

    - by user939951
    I have two tables that link together through an id: "submit_moderate" and "submit_post". The "submit_moderate" table looks like this:

        id    moderated_by    post
        1     James           60
        2     Alice           32
        3     Tim             18
        4     Michael         60

    I'm using a simple query to get data from the "submit_post" table according to the "submit_moderate" table:

        $get_posts = mysql_query("SELECT * FROM submit_moderate WHERE moderated_by!='$user'");

    $user is the person who is signed in. Now, my problem is that when I run this query as the user 'Michael', it retrieves:

        1     James           60
        2     Alice           32
        3     Tim             18

    Technically this is correct; however, I don't want to retrieve the first row, because post 60 is associated with Michael as well as James. Basically, I don't want to retrieve that value '60' at all. I know why this is happening, but I can't figure out how to fix it. I appreciate any hints or advice I can get.

    Read the article

  • Grails/Spring HttpServletRequest synchronization

    - by Jeff Storey
    I was writing a simple Grails app, and I have a spot in a GSP where one of my Java beans is modified:

        <g:each in="${myList}" status="i" var="myVar">
            // if the user performs some view action, update one of the myVar elements
        </g:each>

    This works, but I don't think it's quite threadsafe. myList is an HTTP request variable, but on pages that use AJAX (or other client-side manipulation) it is possible for two threads to modify the same request-scoped variable. The Spring AbstractController class provides a setSynchronizeOnSession method. Does Grails provide any equivalent functionality? If not, what's the best way to protect this non-threadsafe mutation? Thanks, Jeff
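
    For what it's worth, here is a minimal sketch of that kind of guard, written as plain Java against the Servlet API and Spring's WebUtils; the wrapper class and the Runnable are illustrative, not Grails API:

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;
        import org.springframework.web.util.WebUtils;

        public class SynchronizedUpdater {
            // Serialize concurrent requests that mutate shared per-user state.
            // WebUtils.getSessionMutex returns the same lock object that Spring's
            // setSynchronizeOnSession(true) synchronizes handler execution on.
            public void updateSafely(HttpServletRequest request, Runnable mutation) {
                HttpSession session = request.getSession();
                Object mutex = WebUtils.getSessionMutex(session);
                synchronized (mutex) {
                    mutation.run();
                }
            }
        }

    This serializes the mutation per session, which matches what setSynchronizeOnSession would give you; it does not help if the variable is shared across sessions.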

    Read the article

  • Replacing Resource files on new BlackBerry app version

    - by Jeff
    Hello, I am maintaining an existing BlackBerry application (implemented as a MIDlet). The application contains a number of data files that get bundled with the app as resources, and some of these data files need to be updated for a new version of the app. When the user goes to install a new version of the application (via the URL of a JAD file), it prompts them with the following message: "Persistent data exists for the application. Would you like to retain this data?" If the user selects "Yes", it looks like the app continues to use the old resource files. This is very surprising to me. First of all, am I losing my mind, or will an upgrade really not overwrite existing resource files? Is there any way I can force it to? Thanks, Jeff

    Read the article

  • How do I chain forms in Access? (pass values between them)

    - by jeff porter
    Hello, I'm using Access 2007 and have a data model like this:

        Passenger - Bookings - Destinations

    So one Passenger can have many Bookings, each for one Destination. My problem: I can create a form that allows the entry of Passenger details, but I then want to add a NEXT button to take me to a form for entering the details of the Booking (i.e. just a simple drop-down list of the Destinations). I've added the NEXT button, and it has the events:

        RunCommand SaveRecord
        OpenForm Destination_form

    BUT I can't work out how to pass across to the new form the primary key of the passenger that was just entered (PassengerID). I'd really like to have just one form that allows the entry of the Passenger details and the selection of a Destination, and that then creates the entries in the two tables (Passenger & Bookings), but I can't get that to work either. Can anyone help me out please? Thanks, Jeff Porter

    Read the article

  • Delphi VirtualStringTree - Check for Duplicates?

    - by Jeff
    Hello S.O! Yeah, I know I post a lot of questions, but that's because I either need assurance that I am doing it right, need to know what I am doing wrong, or am totally clueless and can't find anything in the documentation. Anyway, I am trying to check for duplicate nodes. Here is how I would want to do it: loop through my nodes and compare each node's text (record). But if I have many nodes, wouldn't that be too time- and memory-consuming? Would there be a better approach for this? Thanks! - Jeff.
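
    The usual way to avoid comparing every pair of nodes is a single pass that remembers the texts already seen in a hash set. Here is a sketch of that idea in Java, purely to illustrate the algorithm (the Virtual TreeView traversal itself would of course be Delphi code):

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class DuplicateCheck {
            // One pass over the node texts: Set.add returns false on a repeat,
            // so the whole check is O(n) instead of O(n^2) pairwise comparisons.
            static boolean hasDuplicates(List<String> nodeTexts) {
                Set<String> seen = new HashSet<String>();
                for (String text : nodeTexts) {
                    if (!seen.add(text)) {
                        return true; // this text was already seen
                    }
                }
                return false;
            }
        }

    Memory-wise this stores each distinct text once, which is normally far cheaper than the time cost of the pairwise loop.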

    Read the article

  • Create a BufferedImage from RGB pixel values

    - by Jeff Storey
    I have an integer array of RGB pixels that looks something like:

        pixels[0] = <rgb-value of pixel(0,0)>
        pixels[1] = <rgb-value of pixel(1,0)>
        pixels[2] = <rgb-value of pixel(2,0)>
        pixels[3] = <rgb-value of pixel(0,1)>
        ...etc...

    And I'm trying to create a BufferedImage from it. I tried the following:

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        img.getRaster().setPixels(0, 0, width, height, pixels);

    But the resulting image has problems with the color bands. The image is unclear and there are diagonal and horizontal lines through it. What is the proper way to initialize the image with the RGB values? Thanks, Jeff
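
    For reference, a sketch of the usual fix, assuming the array really holds packed 0xRRGGBB ints in row-major order: Raster.setPixels expects separate R, G, B samples (three ints per pixel), so feeding it packed ints would produce exactly this kind of banding, whereas setRGB understands the packed layout:

        import java.awt.image.BufferedImage;

        public class RgbArrayToImage {
            static BufferedImage fromPackedRgb(int[] pixels, int width, int height) {
                BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                // setRGB takes one packed 0xRRGGBB int per pixel, row by row;
                // the final argument is the scan-line stride of the source array.
                img.setRGB(0, 0, width, height, pixels, 0, width);
                return img;
            }
        }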

    Read the article

  • Creating a WordPress dev environment and uploading to production

    - by Jeff
    I am an old-school Java developer who is considering using WordPress. I'm used to developing locally on my PC (yeah, yeah, not even a Mac) and then FTPing my files up to a production environment on a remote server. My high-level review of WordPress gives me the impression that typically there is no concept of lower environments, and that all updates occur directly in production. Is this the case? If not, can someone explain how one goes about uploading the files to a web site? Thanks, Jeff

    Read the article

  • What are the right reverse PTR, domain keys, and SPF settings for two domains running the same application?

    - by James A. Rosen
    I just read Jeff Atwood's recent post on DNS configuration for email and decided to give it a go on my application. I have a web app that runs on one server under two different IPs and domain names, on both HTTP and HTTPS for each:

        <VirtualHost *:80>
            ServerName foo.org
            ServerAlias www.foo.org
            ...
        </VirtualHost>

        <VirtualHost 1.2.3.4:443>
            ServerName foo.org
            ServerAlias www.foo.org
        </VirtualHost>

        <VirtualHost *:80>
            ServerName bar.org
            ServerAlias www.bar.org
            ...
        </VirtualHost>

        <VirtualHost 2.3.4.5:443>
            ServerName bar.org
            ServerAlias www.bar.org
        </VirtualHost>

    I'm using GMail as my SMTP server. Do I need the reverse PTR and SenderID records? If so, do I put the same ones on all of my records (foo.org, www.foo.org, bar.org, www.bar.org, ASPMX.L.GOOGLE.COM, ASPMX2.GOOGLEMAIL.COM, ...)? I'm pretty sure I want the DomainKeys records, but I'm not sure which domains to attach them to. The Google mail servers? foo.org and bar.org? Everything?

    Read the article

  • SVNParentPath directory authorization

    - by James
    The question is a bit stupid, but I can't get it sorted. I have a server with SVN that uses the SVNPath directive in httpd.conf, and everything works fine with path authorizations. Now I'm installing a second server where I'm going to use the SVNParentPath directive, and I've got it all running except that I can't get the authorization part quite right. From what I understand, it's the same as when you use SVNPath, but you need to specify the repo name before the folder name. My SVNParentPath is /srv/svn/, and I created a directory /srv/svn/testproj and then ran:

        svnadmin create /srv/svn/testproj

    Now I'm configuring my authorization file:

        [/]
        * =
        svnadmin = rw
        adusgi = rw

        [testproj:/svn/testproj]
        demada = rw
        degari = rw
        scarja = rw

    If I try to commit /svn/testproj using user svnadmin or adusgi, all is fine. If I try, for example, demada, it doesn't work (I've run the htpasswd2 commands for the user, obviously). The directory is correct, or at least that's how I use the directory with the SVNPath server that's already running. The part I think I'm getting wrong is the repo name; I just used the directory name, but what am I really supposed to put there? Thank you, James

    Read the article

  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer which has no CDROM drive or internet access, and I have installed Ubuntu Server onto it. I have that all up and running nicely, but now I'd like to install Xfce, GNOME or something similar so I can load up a desktop environment from the command line if I wish. Obviously, with internet access or a CDROM this would be a simple task of using apt-get and letting it find and retrieve the packages for me, but I have neither. I do however have a USB drive, and I have used Unetbootin to make it into a bootable drive with the Ubuntu Server disk image files on there. I have mounted the USB drive to /media/usb0 and tried the command:

        sudo apt-cdrom add -d /media/usb0

    to get apt to recognise the USB drive as an "Ubuntu CD" -- a source of package files -- but apt-get doesn't seem to be finding Xfce. I tried "sudo apt-get install xfce" and "sudo apt-get install xfce4", but neither finds the package. I would prefer Xfce, but GNOME would be OK too. My question is: am I doing something wrong? I figured that the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop environment packages on there, so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop", but it couldn't find the package, even though it is listed under the /casper/ directory in a MANIFEST file. Anyone see where I'm going wrong? Maybe apt-get install is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on!? Thanks in advance guys, any input would be much appreciated. Cheers - James

    Read the article

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi all, I'm using Java 1.6, jTDS 1.2.2 (I also just tried 1.2.4, to no avail) and SQL Server 2005 to create a CallableStatement that runs a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query-plan caching. The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table.

    I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that the SQL is sent down, and the database then executes it. Could the database be giving the Java app a different query plan? I've posted to both the MSDN forums and the SourceForge jTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

        sLogger.info("Preparing call...");
        stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        sLogger.info("Call prepared. Executing procedure...");
        stmt.executeQuery();
        sLogger.info("Procedure complete.");

    I have run SQL profiler and found the following:

        Java app:  CPU: 466,514   Reads: 142,478,387   Writes: 284,078   Duration: 983,796
        SSMS:      CPU: 466,973   Reads: 142,440,401   Writes: 280,244   Duration: 769,851

    (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think?

    It turns out that the query plans are significantly different for the different clients: the Java client is updating an index during an insert that isn't updated in the faster SQL client, and the way it executes joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls, etc.) between the Java and Management Studio clients. In the end the two clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I saw differing performance not just between a JDBC client and Management Studio, but also with Microsoft's own command-line client, SQLCMD. I also checked some more radical things, like network traffic, and tried wrapping the stored proc inside another stored proc, just for grins.

    I have a feeling the problem lies somewhere in how the cursor was being executed, somehow causing the Java process to be suspended; but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that four days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all the data each week anyway) and chalk this one up to experience.

    I'll leave the question open; big thanks to everyone who put their hat in the ring, it was all useful, and if anyone comes up with anything further, I'd love to hear more options. And if anyone finds this post as a result of seeing this behaviour in their own environment, then hopefully there are some pointers here that you can try yourself and, with luck, see further than we did. I'm ready for my weekend now! -James
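
    For anyone retracing the "homogenise the connection properties" step mentioned above, here is a minimal Java 1.6-compatible sketch over plain JDBC (the URL is a placeholder; ARITHABORT is the classic setting that Management Studio turns on and JDBC drivers typically leave off, which by itself can yield a different cached plan):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class SessionOptions {
            static Connection openWithSsmsDefaults(String url, String user, String password) throws Exception {
                // e.g. url = "jdbc:jtds:sqlserver://dbhost:1433/mydb" (placeholder)
                Connection con = DriverManager.getConnection(url, user, password);
                Statement s = con.createStatement();
                try {
                    // Align session settings with Management Studio's defaults
                    // before preparing the CallableStatement on this connection.
                    s.execute("SET ARITHABORT ON");
                } finally {
                    s.close();
                }
                return con;
            }
        }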

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hours at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hours per week! We've done our best to use some simple methods of distributing the load, spreading the daily clients evenly among the servers so that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETL so that runs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person who doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran groups of three machines, with SSIS on one, SQL on another, and SSAS on a third? Obviously that would only make sense if we could then process more than the three ETLs we were able to process on the machines independently.

    Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something you would think someone would sell (although I haven't had any luck finding it).

    So in summary: are we missing an obvious solution, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately, VMware's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict schedule-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember, no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that keep us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • SharePoint 2010 – SQL Server has an unsupported version 10.0.2531.0

    - by Jeff Widmer
    I am trying to perform a database-attach upgrade to SharePoint Foundation 2010. At this point I am trying to attach the content database to a web application by using Windows PowerShell:

        Mount-SPContentDatabase -Name <DatabaseName> -DatabaseServer <ServerName> -WebApplication <URL> [-Updateuserexperience]

    I am following the directions from this TechNet article: Attach databases and upgrade to SharePoint Foundation 2010. When I go to mount the content database, I receive this error:

        Mount-SPContentDatabase : Could not connect to [DATABASE_SERVER] using integrated security: SQL server at [DATABASE_SERVER] has an unsupported version 10.0.2531.0. Please refer to "http://go.microsoft.com/fwlink/?LinkId=165761" for information on the minimum required SQL Server versions and how to download them.

    At first this did not make sense, because the default SharePoint Foundation 2010 website was running just fine. But then I realized that the default SharePoint Foundation site runs off of SQL Server Express, and that I had just installed SQL Server Web Edition (since the database is greater than 4 GB) and restored the database to this version of SQL Server. Checking the documentation link above, I see that SharePoint Server 2010 requires a 64-bit edition of SQL Server, with the following minimum required versions:

        - SQL Server 2008 Express Edition Service Pack 1, version number 10.0.2531
        - SQL Server 2005 Service Pack 3 cumulative update package 3, version number 9.00.4220.00
        - SQL Server 2008 Service Pack 1 cumulative update package 2, version number 10.00.2714.00

    The version of SQL Server 2008 Web Edition with Service Pack 1 (the version I installed on this machine) is 10.0.2531.0. SELECT @@VERSION returns:

        Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)   Mar 29 2009 10:11:52   Copyright (c) 1988-2008 Microsoft Corporation   Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM)

    I had to read the article several times, since the minimum version number for SQL Server Express is 10.0.2531.0. At first I thought I was good with the version of SQL Server 2008 Web Edition that I had installed, also 10.0.2531.0. But reading further, there is a cumulative update (hotfix) for SQL Server 2008 SP1 (NOT the Express edition) that is required for SharePoint 2010 and bumps the version number to 10.0.2714.00.

    So the solution was to install Cumulative Update Package 2 for SQL Server 2008 Service Pack 1 on my SQL Server 2008 Web Edition, to allow SharePoint 2010 to work with SQL Server 2008 (other than the SQL Server 2008 Express version). SELECT @@VERSION after installing Cumulative Update Package 2 returns:

        Microsoft SQL Server 2008 (SP1) - 10.0.2714.0 (X64)   May 14 2009 16:08:52   Copyright (c) 1988-2008 Microsoft Corporation   Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM)

    Read the article

  • How do I prevent a website being misclassified by Websense?

    - by Jeff Atwood
    I received the following email from a user of one of our websites: "This morning I tried to log into example.com and I was blocked by Websense at work because it is considered a 'social networking' site or something. I assume the Websense filter is maintained by a central location, so I'm hoping that by letting you guys know, you can get it unblocked." Per Wikipedia, Websense is web filtering or Internet content-control software. This means one (or more) of our sites is being miscategorized by Websense as "social networking" and is thus disallowed at any workplace that uses Websense to control which websites its users can and cannot access during work hours. (I know, they are monsters!) How do we dispute this Websense classification error, since our websites should generally be considered "information technology" and never "social networking"? And how do we find out what category Websense has put our sites in, so we can proactively make sure they're not wrong?

    Read the article

  • Difference between 12.04 and 12.04.1

    - by Jeff
    I did a fresh install of Ubuntu 12.04 two days ago. Or at least I thought it was 12.04; it was actually 12.04.1. Now I'm getting errors from the GRUB loader:

        Error: no video mode activated

    which was apparently resolved in bug #699802. However, the workarounds there are for 11.xx and are not working for me. I never had these errors before with 12.04, and now I'm getting them. What's the difference between 12.04 and 12.04.1? Off the bat, I notice that the kernels are different:

        12.04 uses 3.2.0-26-generic-pae
        12.04.1 uses 3.2.0-29-generic
        (after an immediate sudo apt-get update/upgrade, 12.04.1 uses 3.2.0-30-generic)

    I have two other computers running 12.04 (not 12.04.1) and they're working fine. The computer I'm currently on was working fine (with 12.04) previously too. Should I roll back my kernel to 3.2.0-26?

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications, and the ones it does are typically not full-featured; to get the full features you need to buy the upgraded version. For example, when it comes to photo editing and cataloging, iPhoto is not a solution for large files or RAW processing; you need Aperture, which is a couple hundred dollars. I am not complaining, because I like it when an application has a product team generating revenue with it, since the chance of it being around longer seems higher.

    What is my point in all of this? Apple does not produce a product for blogging/journaling like Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, the integration with most blog APIs, and the ability to Save Local or Publish as a Draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don't believe that will ever happen, as it is not a revenue-generating product and Microsoft doesn't often port to the Mac besides Remote Desktop Connection and MSN Messenger.

    For my configuration, I used to use only Boot Camp on the two MacBook Pros I have owned in the past three years (because I'm a PC), but after four different rebuilds (not typically due to Windows, but to Boot Camp or Parallels) I decided to move off the Boot Camp platform and to VMware Fusion. That is a completely separate blog post I should spec out in MacJournal, but I now always boot into the Mac OS and use Fusion for my AJI Software VM or my clients' VMs. It just seems to work better for me, and I have a very nice way to back up my Windows environments with VMware.

    Needless to say, there was a need in my new laptop configuration for a blogging tool that works natively on a Mac. I don't like having to boot up a VM just to use Windows when all I want is to write a document or work on an image. Some would say, why not just use a Windows laptop and put the MBP on eBay? It is just a preference, and right now I like the Mac OS for day-to-day work.

    So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is missing some features I like in MacJournal. I hope the price point comes down on MacJournal, because I could see paying $19.95 for it; it is always hard for me to buy a piece of software for $39.95 when I can use something else. I am a cheapskate when it comes to software packages. I suggest that if you are using a Mac, you drop what you are doing and pick up the MacHeist bundle today before it is over; if you are reading this later, download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment.

    Product links:
    - MacJournal by Mariner Software, $39.95 (part of the MacHeist bundle for $19.95, with only one day left)
    - Windows Live Writer by Microsoft

    This post was created using MacJournal.

    [Update: The joys of formatting. If you are a Geekswithblogs.net member, make sure you use this configuration to set up the MetaBlog formatting of paragraphs correctly.]

    Read the article

  • How to configure XRDP to start Cinnamon as the default desktop session

    - by Jeff
    I was wondering if there is a way to make Cinnamon 1.4 the default environment upon logging in to Ubuntu 12.04. I can install Cinnamon 1.4 without any problems, but I am trying to run XRDP to log in from a Windows machine, and I would like it to start a Cinnamon session instead of a Unity session by default. The question is: how can I tell XRDP to use Cinnamon instead of Unity upon logging in? XRDP seems to work much better than any VNC-based server.

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction: removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked-out pages, there are actually many uses for redaction today beyond military and government. Many companies have a need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or to less privileged users; insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method:

        Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
        Step 2. Make a copy of that document, since some folks still need to access the original contents
        Step 3. Black out the text or pages you want to hide
        Step 4. Release or distribute this new 'redacted' copy

    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!

        1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
        2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
        3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK!
        4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that: no redactions. An 'unauthorized' user, when requesting to view that same document, can get redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK!

    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember, this is all in our flagship AutoVue product; no additional software required!

    Read the article
