Search Results

Search found 23515 results on 941 pages for 'zend view'.


  • When does implementing MVVM not make sense

    - by Kelly Sommers
    I am a big fan of various patterns and enjoy learning new ones all the time. However, I think all the evangelism around popular patterns and anti-patterns sometimes causes blind adoption. Most things have individual pros and cons, and it's important to educate people about what the cons are and when it doesn't make sense to make a particular choice; the pros are constantly advocated already. "It depends" applies most of the time, but the industry does a poor job of communicating what it depends ON. Many patterns also surfaced by inheriting values from previous patterns, or have derivatives, and each one brings another set of pros and cons to the table. The sooner we are aware of the trade-offs of the decisions we make in software architecture, the sooner we make better decisions. So this is my first challenge to the community: even if you are a big fan of a given pattern, I challenge you to discover its cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense for a particular piece of software, and based on what reasons. MVVM has a set of pros and cons; let's try to define them. GO! :)

    Read the article

  • Demoted domain controller and now IIS has permissions issues

    - by tladuke
    I have a machine that was a domain controller and everything else, including an IIS site. I built 2 new domain controllers, transferred the FSMO roles, waited a day, and then demoted the original domain controller. Now the IIS site says: HTTP Error 401.2 - Unauthorized: You are not authorized to view this page due to invalid authentication headers. I have a call in with the web app vendor, but maybe someone can guess what I need to fix. I haven't looked at IIS since it was installed and am pretty lost. I thought about restoring the machine from a backup, but I think that would be an Active Directory disaster, right? The server is Windows 2008 (not R2); the new DCs are 2008 R2.
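
    A 401.2 means no enabled authentication method produced headers the server would accept, and a DC demotion can easily invalidate the identity or scheme the site was authenticating against. As a first hedged diagnostic (assuming a default IIS 7 layout and that "Default Web Site" is the affected site; adjust the name as needed), dumping the authentication sections shows which schemes are still enabled:

        rem Which authentication schemes does the site have enabled?
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/security/authentication/anonymousAuthentication
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication

    If anonymous authentication turns out to be enabled but running as a domain account that no longer exists, re-pointing it at the built-in IUSR account would be a reasonable first thing to try.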

    Read the article

  • What's more cost effective: hosting your web startup on FOSS or Windows?

    - by user37899
    Hi. Not coming from the Windows world, I'm confused about licensing; I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we had to pay for Client Access Licenses, which worked out quite expensive. Is it cheaper to host thousands of users on Windows than with free open source software tools? http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits suggests I can pay $15 a month for unlimited users. I certainly hope to get an unbiased view here; I am a professional and use the right technology for the right job, and I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft certified licensing person can clear this up: should I be recommending Windows servers and products over LAMP to startups? Cheers!

    Read the article

  • In MVC, should the DAO be called from the Controller or the Model?

    - by tito
    I have seen various arguments against calling the DAO directly from the Controller class, and also against calling it from the Model class. Personally, I feel that if we are following the MVC pattern the controller should not be coupled to the DAO: the model class should invoke the DAO from within, and the controller should invoke the model class. That way we can decouple the model class from the web application and expose its functionality in other ways, for example to a REST service. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse that functionality, right? I have summarized both approaches below.

    Approach #1:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                new CustomerDAO().save(customer);
            }
        }

    Approach #2:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                customer.save();
            }
        }

        public class Customer {
            ...........
            public void save() {
                new CustomerDAO().save(this);
            }
        }

    Note - here is a definition of Model: the model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react. I would appreciate an expert opinion on this, because I see many people using both #1 and #2. So which one is it?
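
    A third variant worth sketching (my own hedged addition, not from the question; the CustomerService name and its single method are illustrative assumptions): keep persistence out of the entity and behind a thin service that both the servlet controller and a REST resource can call, so neither ever touches the DAO directly.

        // Hypothetical service layer shared by web and REST front ends.
        public class CustomerService {
            private final CustomerDAO dao = new CustomerDAO();

            // Business operation; persistence stays an implementation detail.
            public void register(Customer customer) {
                dao.save(customer);
            }
        }

        public class CustomerController extends HttpServlet {
            private final CustomerService service = new CustomerService();

            protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
                Customer customer = new Customer("xxxxx", "23", 1);
                service.register(customer); // the controller never sees the DAO
            }
        }

    A REST endpoint can then depend on the same CustomerService, which is exactly the reuse the question is after.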

    Read the article

  • Loading guest OS's (Windows) localhost through my host's (Mountain Lion) browsers

    - by Jonah Goldstein
    For work I have to develop in Visual Studio, which I run via VMware Fusion 5. I really want to test via my Mac's native browsers, for a multitude of reasons: that is, view the IIS web stuff that my Windows VM exposes in my Mac's own native Firefox, Chrome, etc. If I could expose a pretty URL that would be even better, but I would certainly settle for an ugly IP :) I got a decent number of views but no response when I asked on VMware's own boards. Everyone seems to want to go the other direction (developing in SublimeText/TextMate, serving up through MAMP, and exposing it to Windows browsers to test), and there seem to be tried and true solutions for that. Unfortunately (or fortunately, depending on your preference), my startup is pretty entrenched in the Visual Studio development tools. I'm really hoping that someone knows the answer to this. Thanks :)
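
    A hedged sketch of the usual route (assuming the VM uses Fusion's bridged or NAT networking and you can read its address from ipconfig inside Windows): browse to the guest's IP directly, and fake the pretty URL with an /etc/hosts entry on the Mac. The IP and hostname below are placeholders:

        # Map a friendly name to the guest's current IP
        # (192.168.111.130 is a placeholder; check ipconfig in the VM)
        echo "192.168.111.130 devbox.local" | sudo tee -a /etc/hosts

    After that, http://devbox.local/ should reach the guest's IIS from Safari, Chrome, or Firefox, provided port 80 is open in the Windows Firewall on the guest.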

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less-structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched, and unified view of master data (by resolving semantic discrepancies and augmenting the explicit master data from within the enterprise with implicit data from outside the enterprise, like social profiles) will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services, which would find it very challenging to get comprehensive analytical coverage and long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented, and incomplete master data. For analytical success from Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize, and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link between the two to extract maximum insight. MDM is the catalyst that maintains that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, thereby helping accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to share comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • Can I set up a 2nd home wireless router, with router2 connecting to the internet through a desktop which is wirelessly connected to router1?

    - by gil b.
    Hi, I apologize for the crudeness of my MSPaint drawing, but please view my diagram of what I'd like to accomplish: Proposed home network architecture. Currently, all devices are connected to one wireless router. I would like to make my own subnet, with a box in between my subnet and the shared wireless router, so that I can learn about IDS, traffic analysis, etc. I was also given a Cisco PIX firewall to play around with, and it'd be an added bonus if I could incorporate that into my network. The reason for this proposed architecture is so that I can monitor all MY traffic without seeing anything of my roommates' traffic. My MAIN question is: is it possible to have my desktop connect to the wireless router via its wireless card AND share that connection via its Ethernet card, hooked to wireless router 2? So: cable modem - wireless router - desktop PC connected wirelessly - wireless router 2 getting Internet from a wired connection to the desktop PC - laptops connected wirelessly. The PIX can be left out for now, but I'm wondering if it could eventually be incorporated? THANKS!

    Read the article

  • htaccess - Redirects more than one level deep not working

    - by barfoon
    Hey everyone. Just moved to shared hosting on GoDaddy and I'm trying to get my .htaccess rules working. Here's what I have:

        ErrorDocument 404 /error.php
        Options FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.org$
        RewriteRule ^(.*)$ http://mydomain.org/$1 [R=301,L]
        RewriteRule ^view/(\w+)$ viewitem.php?itemid=$1 [R=301,L]
        RewriteRule ^category/(\w+)$ viewcategory.php?tag=$1 [R=301,L]
        RewriteRule ^faq$ faq.php
        RewriteRule ^about$ about.php
        RewriteRule ^contact$ contact.php
        RewriteRule ^submit$ submit.php
        RewriteRule ^contactmsg$ handler-contact.php

    All the pages at the root of the domain seem to be working, i.e. mydomain.org/faq and mydomain.org/about work. But whenever I try mydomain.org/category/somecategory, I get a 404. How can I fix my .htaccess to obey these rules that are more than one level deep? Thanks,
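
    One hedged guess at the culprit (an assumption, not a confirmed fix): the [R=301] flag on the view/ and category/ rules turns them into external redirects to the ugly .php URLs instead of internal rewrites, and the \w+ pattern will not match slugs containing hyphens. An internal rewrite with a wider character class would look like:

        RewriteRule ^view/([\w-]+)$ viewitem.php?itemid=$1 [L]
        RewriteRule ^category/([\w-]+)$ viewcategory.php?tag=$1 [L]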

    Read the article

  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDF) to be released by Receivables Development. Generic Data Fixes are created by development to fix data issues caused by bugs in the application code. Other GDF benefits/features include:
    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the condition caused by a bug.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script. The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.
    In Release 12, when you create a refund by either applying a receipt to the Refund activity (which creates an invoice in Payables) or going directly into Payables to create a refund for an open credit memo in Receivables, and the invoice in Payables associated with the refund is then cancelled, the corresponding refund application or credit memo in Receivables is not properly reinstated. The receipt application remains applied to the refund, whereas it should be automatically unapplied, and the credit memo stays closed instead of being reopened. Doc ID 761993.1 includes the patch to make sure this doesn't happen in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify refund applications with this problem. Stay tuned for more GDF scripts coming soon...

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging, I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    If I can remedy the above messages, I think I'll be good to go.

    * EDIT 2 * Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! It works. Despite the documentation stating the files are created automatically, and despite what I have seen before, I don't know why, but that did the trick. I also re-checked perms on the dir the files live in; as certain as I was, they were correct, with named having rw.

    CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6. Basic DHCP and DNS functionality are in place on 192.168.111.2: clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot

    I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic); no matter which dir, with named owning it and wide-open perms, I now get nowhere. Along the way I did, at one point, get a permission denied when named tried to create the journal. I resolved that by:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame the permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect). That didn't matter, since this is a new setup, so I deleted the journal, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or my chrooted /var/log/named.debug. I increased the debug level with 'rndc trace'; no love. I increased trace to 10; still nothing. SELinux is disabled:

        [root@server temp]# sestatus
        SELinux status:                 disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;
        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf:

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };
        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows the /var/log/named.log output of named starting up; no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.
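
    For anyone landing here with the same symptom, a hedged restatement of the workaround from EDIT 2 as commands (paths assume the chrooted layout above; the reverse-zone file name is a hypothetical placeholder, and BIND derives each journal name by appending .jnl to the zone file name):

        # Pre-create the journal files named refuses to create itself,
        # fix ownership, and restart
        touch /var/named/chroot/var/named/dynamic/example.lan.db.jnl
        touch /var/named/chroot/var/named/dynamic/192.168.111.rev.jnl   # placeholder reverse-zone file name
        chown named:named /var/named/chroot/var/named/dynamic/*.jnl
        service named restart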

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning; consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take.

    FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" will be able to display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has been implemented using WCF, allowing external companies to interface with the system also. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers.

    CURRENT ARCHITECTURE: It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it that way, as it tightly couples the db with the UI. Specific business logic has been added to partial classes of the EF objects.

    QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable, and secure.
    1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTOs / POCO / the Active Record pattern, etc.
    2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers?
    3) Would it be beneficial to convert the project from WebForms to MVC?

    Read the article

  • A quick hello to the Western Kentucky .NET User Group

    - by Muljadi Budiman
    A few days back, I got a chance to speak at the Western Kentucky .NET User Group meeting in Murray, Kentucky.  The opportunity came up because the original speaker, Jeff Blankenburg, had another obligation and was thus unable to come to this meeting.  I volunteered to deliver his presentation, which is an overview of MIX10 conference. It was a great experience for me; got to drive around and do a little bit of sight-seeing – can’t say I’ve ever been to Kentucky before, so first trip ever there.  I got to meet the user group’s current lead, Tom Turner and got to chat and discuss about all kinds of stuff with the other members.  Cheers to Matt Gawarecki and Brandon Sharp! The presentation itself mostly covers new features in Visual Studio 2010, which was recently released on April 12 – got to demonstrate Historical Debugging in IntelliTrace, Parallel Stacks, View Call Hierarchy and show some Extensions.  We also covered some of the new functionalities in Silverlight 4 (using webcams, drag & drop support among others) and I got to show off Scott Guthrie’s Windows Phone 7 Twitter app.  Altogether, it was quite a bit to cover in 70 minutes or so, but I think everyone enjoyed it. Jeff provided me with the presentation slides (which I modify a bit) and demo applications; so I’m putting it up here for those that may be interested in downloading them.  Please keep in mind that all the demos were made with VS2010 RC, so there may be slight tweaks to get it to work on the RTM version.

    Read the article

  • Ctrl+W s not splitting windows in vim

    - by rajan sthapit
    I am trying to use Ctrl+W s to split a window in vim, but it is not working in my case. I opened a file using vim filename, then pressed Ctrl+W s. As soon as I press just Ctrl+W, it clears the viewport and I can see my shell display from before vim opened the file; that is, the viewport is replaced by the content that was there just before the file was opened. However, I am still editing the file with vim. Suggestions?
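
    A hedged guess rather than a confirmed diagnosis: some terminal setups intercept parts of the chord (if Ctrl is still held when s is pressed, Ctrl+S is XOFF flow control), so the keys may never reach vim. Two things worth trying, as a sketch:

        # Inside vim: split the window without the Ctrl+W chord
        :split

        # In the shell, before starting vim: stop the terminal from
        # treating Ctrl+S/Ctrl+Q as flow control
        stty -ixon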

    Read the article

  • Accessing live USB files from a new HD Ubuntu install

    - by Robin Bailey
    After my live USB (Ubuntu 12.04 LTS) refused to boot, I installed the same Ubuntu version on the laptop hard drive (a dual boot next to Win XP). This all went well without a hitch. Before this, I spent several weeks enjoying and exploring Ubuntu from the USB pendrive, and during that time I changed lots of settings and customized Firefox and more. Now I'd like to import the home folder from the USB drive into the new install's home folder on the hard disk, since to my knowledge the home folder is what holds all those special settings. Unfortunately, being familiar only with Windows file systems, I find the view of the USB file system from the new install totally perplexing: I can't find anything that looks anywhere close to the original file system. Worse, I can't find any of the files I had created and stored there, like the LibreOffice Calc file that has all my passwords (this one is really discouraging), which was stored on the Ubuntu desktop. Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those settings into the new install would be a major bonus, but hey, I'm not greedy: I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks
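
    A hedged pointer for anyone in the same spot (assuming the live USB was created with persistence, which is what preserved those settings between boots): the changes do not live in a normal home directory on the stick, but inside a casper-rw overlay file at the root of the USB partition. Mounting it loopback should expose the saved files; the device name and mount points below are placeholders:

        # Mount the USB stick, then the persistence overlay inside it
        sudo mkdir -p /mnt/usb /mnt/persist
        sudo mount /dev/sdb1 /mnt/usb            # device is an assumption; check with: sudo fdisk -l
        sudo mount -o loop /mnt/usb/casper-rw /mnt/persist
        # Saved settings and desktop files should then be under:
        ls /mnt/persist/home/ubuntu/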

    Read the article

  • tradeoffs of iSCSI vs. AFP when using Time Machine with a NAS?

    - by ajit.george
    I'm setting up a home NAS device (Synology DS409) that I'm planning to use for Time Machine backups (amongst other things). What are the tradeoffs between using iSCSI or AFP to mount the backup volume? The Synology wiki suggests that iSCSI is better if the Mac will be frequently disconnected from the network or sleeping, from the point of view of the volume automatically remounting. What about filesystem consistency? Given that unplugging a USB drive without properly unmounting it often requires the Time Machine volume to be repaired, would iSCSI have the same issues? Thanks in advance.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.
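
    One hedged sketch of the rearranged loop (an assumption on my part: this uses GL 4.2+ image load/store rather than the glFramebufferTextureLayer path in the question, and all names are placeholders): run a single full-screen pass over the depth texture and scatter each texel into its slice with imageStore, so the depth texture is sampled exactly once.

        // Fragment shader sketch (GLSL 4.20): one pass over the depth texture,
        // writing directly into the 3D texture instead of per-slice FBO passes.
        #version 420
        layout(binding = 0) uniform sampler2D depthTex;
        layout(binding = 1) uniform sampler2D colorTex;     // e.g. RSM flux
        layout(binding = 0, rgba16f) uniform writeonly image3D volume;
        uniform int numSlices;
        in vec2 uv;

        void main() {
            float depth = texture(depthTex, uv).r;
            int slice = int(depth * float(numSlices));      // pick the target slice from depth
            ivec2 texel = ivec2(gl_FragCoord.xy);
            vec4 lighting = texture(colorTex, uv);          // process color data as needed
            imageStore(volume, ivec3(texel, slice), lighting);
        }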

    Read the article

  • How do I adjust the actual font sizes in Internet Explorer?

    - by Cyberherbalist
    I like having as much stuff on the screen at once as I can, but as a consequence of getting on in years, small font sizes are starting to become a problem. Internet Explorer has the ability to increase the font size of text in webpages (those pages that don't set the font size explicitly): on the main menu it is View | Text Size, and the choices are Smallest, Smaller, Medium, Larger, and Largest. On many web pages Medium is just a tad too small for me, but if I go to Larger the font size is way too big. My question is: is there a way to set these text-size jumps differently from what they seem to be?
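
    A hedged workaround with finer granularity (an assumption worth verifying on your IE version; I cannot confirm this registry location for every release): IE's page zoom, unlike Text Size, moves in arbitrary steps, and is believed to be stored as a per-user ZoomFactor value where 100000 means 100%. A .reg sketch for 115%:

        Windows Registry Editor Version 5.00

        ; Hypothetical tweak: dword 0x0001c138 = 115000 = 115% page zoom
        [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Zoom]
        "ZoomFactor"=dword:0001c138

    Page zoom can also be driven from the keyboard with Ctrl+Plus and Ctrl+Minus, which gives smaller increments than the five Text Size choices.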

    Read the article

  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow; instead you can use page-level parameters defined for your page in the adfc-config.xml file. Below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view-object-level service method and how to invoke it from your page. P.S. You might wonder: why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department the param.amount value is gone. The viewScope value, however, is still there as long as you stay on this page.
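
    For reference, a minimal sketch (from memory, so treat the exact element names as assumptions and verify against the adfc-config schema; the view id and parameter name are placeholders matching the demo's EL) of a page-parameter definition for a view activity in adfc-config.xml, copying the URL parameter into viewScope:

        <view id="departments">
          <page>/departments.jspx</page>
          <!-- copy the URL parameter into viewScope so it survives submits -->
          <page-parameter>
            <from>#{param.amount}</from>
            <to>#{viewScope.amount}</to>
          </page-parameter>
        </view>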

    Read the article

  • Video on Hyperion Tax Provision

    - by Lia Nowodworska - Oracle
    (in via Jan) EPM Information Development has asked us to remind you about the new video available for Hyperion Tax Provision. You can view it on the OracleEPMWebcasts YouTube channel here: http://bit.ly/1jxLlCy - an information-rich 4:40 minutes of your time, so please take a look. The video gives a brief overview of the main features of Hyperion Tax Provision. You will learn:
    - that the Tax Provision and Reporting system builds on the Hyperion Financial Close Reporting platform;
    - that much of the Tax Provision flow process is similar to the Financial Close process, and that the modules have been aligned to work together very closely;
    - that HTP enables you to integrate book and tax reporting on a common platform;
    - that it uses the technology of HFM, ties in with SmartView, and can be used with Hyperion Financial Reporting;
    - that the native integration between the financial system and the tax system creates transparency for tax departments and removes bottlenecks in the tax process.
    More technical information can be found here: Oracle Hyperion Tax Provision Data Sheet, Oracle Hyperion Tax Provision White Paper, Oracle Hyperion Tax Provision Documentation. If you have another 45 minutes to spare and want to get into greater detail, you can check out the recording of an Advisor Webcast we did earlier last year. You can find it via the KM Doc "Oracle Business Analytics Advisor Webcast Schedule and Archive Recordings" (Doc ID 1456233.1): select the tab "Archived 2013"; it is the third from the top, "Oracle Hyperion Tax Provision - Features and Overview with Demo". If you have questions about that Advisor Webcast, you may participate in the community discussion about it. (layout and post: Torben, authorized: Lia)

    Read the article

  • Unable to see hidden files

    - by TheGoodUser-Sp
    I have BitDefender (with the latest update) on my Windows 7 machine. I want to see hidden files, so from Tools > Folder Options > View I change the settings as below and click OK. But I still can't see hidden files, and when I double-check the options I see the settings have been changed back automatically, as below. I know this is virus-like behavior! Checking with VirusTotal via Process Explorer didn't help. How can I figure out which process is responsible for this?
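
    A hedged way to watch this happen outside the Explorer dialog (the registry locations are standard; the hunting approach itself is just a suggestion): toggle the setting in the registry and see whether something flips it back, then let Process Monitor, filtered on that key, name the writer.

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Hidden /t REG_DWORD /d 1 /f
        rem 1 = show hidden files, 2 = do not show
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Hidden
        rem If the value reverts to 2 on its own, set a Process Monitor
        rem filter on this registry path to catch the process writing it.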

    Read the article

  • Exchange 2010 Room Mailbox Calendar Permissions

    - by Brian Mitchell
    Exchange 2010 SP2, Outlook 2007/2010, Server 2008. I have managed to set up several room mailboxes in Exchange; people are able to book the rooms and they get a response from the Exchange server, which is brilliant. However, users are unable to view the calendar of a room mailbox to see what times are available. Ideally I would like users to see only whether the room is free, not the details of the meeting (title, description, etc.). I have been trying to do this using the following command:

        Add-MailboxFolderPermission -Identity meetingroom -User "Usergroup" -AccessRights AvailabilityOnly -DomainController AD-Server

    This throws the following error:

        Specified argument was out of the range of valid values.
        Parameter name: memberRights
            + CategoryInfo : NotSpecified: (meetingroom:MailboxFolderIdParameter) [Add-MailboxFolderPermission], ArgumentOutOfRangeException
            + FullyQualifiedErrorId : CBC6516F,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Any help would be brilliant; I have been trying to get this done for a couple of days and I'm going around in circles.
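
    One hedged observation, worth testing rather than a guaranteed fix: Add-MailboxFolderPermission operates on a folder, and the -Identity above names only the mailbox. Pointing it explicitly at the Calendar folder is the usual form:

        Add-MailboxFolderPermission -Identity "meetingroom:\Calendar" -User "Usergroup" -AccessRights AvailabilityOnly -DomainController AD-Server
        # Note: the -User value generally needs to be a mailbox or a
        # mail-enabled security group; a plain security group may be rejected.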

    Read the article

  • Why does Apache ignore my Directory block?

    - by Codemonkey
    I just moved my projects to a new workstation and I'm having trouble getting my Apache installation to acknowledge my .htaccess files. This is my /etc/apache2/conf.d/dev config file:

        <Directory /home/codemonkey/dev/myproject/>
            Options -Indexes
            AllowOverride All
            Order Allow,Deny
            Deny from all
        </Directory>

    I know the config file is being included by Apache because it complains if I put erroneous syntax in it (Action 'configtest' fails). My project is reachable through Apache via a symlink in the /var/www directory. The server is running with my user and group, so it has my permissions, and my entire dev folder has permissions set to 770 recursively. Despite all this, I'm still getting an indexed display of my project folder when I visit http://localhost/myproject. Why isn't the above config making it impossible to view the folder in the browser?
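
    A hedged suspicion, consistent with how Apache matches <Directory> sections (worth confirming against the httpd docs for your version): the section is compared against the filesystem path Apache derives from the URL, without resolving symlinks, so a request for /myproject maps to /var/www/myproject and the /home/codemonkey/... block never matches. A block for the symlinked path should behave differently:

        # Matches the path the request actually maps to
        <Directory /var/www/myproject/>
            Options -Indexes
            AllowOverride All
            Order Allow,Deny
            Deny from all
        </Directory>

    If that kicks in, serving the project from the real path directly (or using an Alias instead of a symlink) avoids maintaining two blocks.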

    Read the article

  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view this is important, because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database. Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. Ten years ago, when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner. Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput; a cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to:
    - work well out of the box, and
    - be configurable so that customers can tune it for their specific workloads and requirements.
    The JE cleaner has been field-tested in production for many years, managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table: we haven't had to reinvent the wheel.

    Read the article

  • Configuring Team Foundation Server Basic on Home Server.

    - by Enrique Lima
    For the installation I selected only the Team Foundation Server role. Then I opened the Team Foundation Server Administration Console (which I think is a great addition and improvement over the way TFS was configured in the past) to proceed with the configuration of the pieces. Once I selected Configure Installed Features, the Configuration Center opened up. Now, the choices... In my implementation here I just want to take advantage of source control, primarily: I want to be able to store my code and projects. So, Basic it is! The Basic Configuration Wizard opens up. The options to configure are very limited, but we have to provide details for the SQL Server instance, and then select Install SQL Server Express. If you want to take advantage of another system in your environment to host your database, you could instead choose Use an existing SQL Server Instance. Once it has the details it needs, you get a Summary view to confirm your choices. Once you click Next or Verify, it runs readiness checks on your system to make sure the installation will have a successful pass. And we love GREEN! Since we got the green flag, our next stop is to let the wizard do its magic: click on Configure. And once again, we love GREEN! We click Next, and... we like a big green Success sign. We close the Configuration Center. First results: Web Access... nothing to show, but we are there! And all this running from a Microsoft Home Server installation.

    Read the article
