Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.

Page 257/3809

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful.

    One of the things I regularly do is add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the ‘ALL’ level, which is a tip I picked up from Devin Knight in his blog. There are some rules I’ve developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.

    Rule 1 – Never trust the automatically generated hidden datasets, or even ANY automatically generated MDX queries for that matter. I’ve previously blogged about this here. If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originating query by building a subcube, which means it may NOT be appropriate to use in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I’m developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for last month and a dataset for the last 6 months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I’ve found to reliably avoid this is to obey rule 2.

    Rule 2 – Follow this sequence when working with parameters and datasets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier from the parameter value defined earlier.

    Rule 3 – Don’t tear your hair out when you have just renamed objects and your report doesn’t build. Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can do a search and find references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-)

    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.   @Blakmk

    Read the article

  • Alerting system for Munin

    - by akirk
    I am very satisfied with munin as a monitoring tool (as I only have a simple 2-server setup) but its alerting system is very annoying as you can only configure it to send an e-mail upon every check which generates a warning or an error. It seems like the only option is to use Nagios but in Debian it has Apache as a dependency and I already use nginx on my monitoring machine. All I want is to have the possibility to silence/acknowledge the alarm while I am working on a fix, so that I don't get bombarded with e-mails. Nagios seems like an oversized solution for that anyway. Is there any simple solution for that or am I the only one who feels like he needs such a tool?

    Read the article

  • how to set owner and permissions on a cryptsetup-made device?

    - by Antoine Rodriguez
    I have an encrypted loopback volume. I need to mount and unmount the volume manually, so I use cryptsetup luksOpen and cryptsetup luksClose. However, when I invoke this command, the /dev/mapper device pops up in all the sessions under GNOME/Xfce/KDE/Unity..., and it then lets those users mount (with a password), eject and unmount the volume. It's quite annoying on a multi-user server (you are working on your files and the volume is being unmounted). How can I define ownership and permissions on the device? I've tried the chown and chmod approach, which achieves nothing. Cryptsetup doesn't have any option that lets you do that, and crypttab auto-mounts the filesystem on boot, which is unwanted (I only want manual mounts).

    Read the article

  • System error 67 in scheduled task to transfer files

    - by grom
    Run directly on the command line, the batch script works. But when it is scheduled to run (Windows 2003 R2 server) as the local administrator, I get the following error:

      D:\ScadaExport\exported>ping 192.168.10.78

      Pinging 192.168.10.78 with 32 bytes of data:
      Reply from 192.168.10.78: bytes=32 time=11ms TTL=61
      Reply from 192.168.10.78: bytes=32 time=15ms TTL=61
      Reply from 192.168.10.78: bytes=32 time=29ms TTL=61
      Reply from 192.168.10.78: bytes=32 time=10ms TTL=61

      Ping statistics for 192.168.10.78:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 10ms, Maximum = 29ms, Average = 16ms

      D:\ScadaExport\exported>net use Z: \\192.168.10.78\bar-pccommon\scada\
      System error 67 has occurred.

      The network name cannot be found.

    Any ideas? Google is turning up nothing useful; I just keep finding results relating to DNS etc., but I am using an IP address here.

    Read the article

  • What is Granularity?

    - by tonyrogerson
    Granularity defines “the lowest level of detail”; but what is meant by “the lowest level of detail”? Consider the Transactions table below:

      create table Transactions (
          TransactionID     int not null primary key clustered,
          TransactionDate   date not null,
          ClientID          int not null,
          StockID           int not null,
          TransactionAmount decimal(28, 2) not null,
          CommissionAmount  decimal(28, 5) not null
      )

    A Client can Trade in one or many Stocks on any date – there is no uniqueness to ClientID, Stock and TransactionDate...(read more)
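    To make the point concrete, here is a small sketch of my own (not from the article) against the table above: the table's grain is the individual transaction, so rolling it up to one row per ClientID, StockID and TransactionDate is a coarser grain, and the count shows how many detail rows were collapsed into each summary row.

      -- Roll the transaction-level grain up to one row per client, stock and date.
      -- TradeCount shows how many detail rows exist at this coarser grain.
      select ClientID,
             StockID,
             TransactionDate,
             count(*)               as TradeCount,
             sum(TransactionAmount) as TotalTransactionAmount,
             sum(CommissionAmount)  as TotalCommissionAmount
      from Transactions
      group by ClientID, StockID, TransactionDate;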

    Read the article

  • Operating System Clone from One Motherboard to Another

    - by Devian
    Lately my family and I had the idea of building a super computer from scratch. While we were planning our setup, an idea came to my head that seems possible, but I also want your opinions. Let's say that we have 2 ATX motherboards and 1 MicroATX.

    Motherboard setup 1: 1x ASUS Rampage Extreme Black Edition, 1x Intel Core i7 4960x, 4x GTX Titan, 8x 8GB 1866 RAM.

    Motherboard setup 2: 1x SuperMicro X9DRG-QF, 2x Intel Xeon E7-8890V2, 1x nVIDIA QUADRO K6000, 4x nVIDIA Tesla K40, 128 GB 1866 RAM.

    Now imagine a solid state drive with a switch connected to both of the motherboards. Can I edit and copy all the data in the first motherboard's RAM to the other's, so that after switching the SSD to the second motherboard I can continue running my current operating system from the second motherboard, and vice versa? Let's say my "Switch Application" modifies everything the kernel needs so that it believes nothing happened and continues operating from the same point where the first motherboard stopped (changes to the device list, CPU cores, drivers... etc.).

    Read the article

  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one that can ask a survey question. This is something I'm really curious about, because many of the answers or recommendations or rants in blogs are not universally applicable to every database - small databases must sometimes be treated differently, and uber databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with:
    1) Up to 50GB
    2) 50-500GB
    3) 500GB - 2TB
    4) DEAR GOD THAT'S TOO MUCH INFORMATION!
    ...(read more)
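    For anyone unsure which bucket they fall into, here is a quick sketch of my own (assuming SQL Server 2005 or later) that totals the allocated file sizes per database from the sys.master_files catalog view; size is reported in 8 KB pages, hence the arithmetic.

      -- Total allocated size per database, in GB (8 KB pages * 8 / 1024 / 1024).
      select d.name                                                    as DatabaseName,
             cast(sum(mf.size) * 8 / 1024.0 / 1024.0 as decimal(10,2)) as SizeGB
      from sys.master_files mf
      join sys.databases d on d.database_id = mf.database_id
      group by d.name
      order by SizeGB desc;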

    Read the article

  • silent failure while creating odbc data source

    - by Peter
    I just got really confused trying to create an ODBC data source in Windows 2003 R2. I can create a connection to my chosen server (a MS SQL Server) on the "user DSN" tab, but when I try to do the same thing on the "system DSN tab", the process fails but without an error message. I am able to connect to the target database fine at the end of configuring a new data source, but when I click OK, the data source just isn't there. No error message, no sign that anything went amiss, other than the lack of a new data source. Very annoying, as I had to repeat the process a few times to make sure I wasn't crazy. Anybody got any hints? I suspect it is a permission problem of some sort but since there is no error message, I don't know where to start.

    Read the article

  • LDAP System Authentication in Ubuntu

    - by andrew
    Hi, I'm having a bit of an issue with system authentication against LDAP in Ubuntu. The LDAP server is OpenLDAP on Ubuntu 10.10, and the client is Ubuntu 10.10 also. I've set up the client by following the "LDAP Authentication" steps at https://help.ubuntu.com/10.10/serverguide/C/openldap-server.html apt-get install libnss-ldap; auth-client-config -t nss -p lac_ldap; pam-auth-update I've done these steps on the server and been able to see LDAP users when running getent passwd. Doing the same steps on the client, getent passwd does not return any LDAP users. Any ideas?

    Read the article

  • dnssec zonesigner ignoring out-of-zone data

    - by jordi12100
    I am trying to configure DNSSEC with BIND9 on CentOS 6.4 running the DirectAdmin control panel. I am using this tutorial to make it work: https://www.dnssec-tools.org/wiki/index.php/Zonesigner But I can't get it to work... When I run this command:

      zonesigner --genkeys jordikroon.nl.db jordikroon.nl.db.signed

    I get this error:

      jordikroon.nl.db:17: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:18: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:22: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:29: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:33: ignoring out-of-zone data (jordikroon.nl)
      zone jordikroon.nl.db/IN: has no NS records
      zone jordikroon.nl.db/IN: not loaded due to errors.

    I can't find anything on the web about this error. This is my zone db file:

      $TTL 14400
      @                       IN      SOA     ns1.ghservers.org. hostmaster.jordikroon.nl. (
                                              2013090703 14400 3600 1209600 86400 )
      jordikroon.nl.  14400   IN      NS      ns1.ghservers.org.
      jordikroon.nl.  14400   IN      NS      ns2.ghservers.org.
      cp              14400   IN      A       85.17.32.228
      ftp             14400   IN      A       85.17.32.228
      jordikroon.nl.  14400   IN      A       85.17.32.228
      localhost       14400   IN      A       127.0.0.1
      mail            14400   IN      A       85.17.32.228
      pop             14400   IN      A       85.17.32.228
      smtp            14400   IN      A       85.17.32.228
      www             14400   IN      A       85.17.32.228
      jordikroon.nl.  14400   IN      MX      10 mail
      jordikroon.nl.  14400   IN      TXT     "v=spf1 a mx ip4:85.17.32.228 ~all"
      localhost       14400   IN      AAAA    ::1

    How do I have to fix this? All IN keywords are being ignored. Any help is welcome :-)

    Read the article

  • SSRS Parameters and MDX Data Sets

    - by blakmk
    Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful.

    One of the things I regularly do is add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the ‘ALL’ level, which is a tip I picked up from Devin Knight in his blog. There are some rules I’ve developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.

    Rule 1 – Never trust the automatically generated hidden datasets, or even ANY automatically generated MDX queries for that matter. I’ve previously blogged about this here. If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originating query by building a subcube, which means it may NOT be appropriate to use in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I’m developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for last month and a dataset for the last 6 months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I’ve found to reliably avoid this is to obey rule 2.

    Rule 2 – Follow this sequence when working with parameters and datasets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier from the parameter value defined earlier.

    Rule 3 – Don’t tear your hair out when you have just renamed objects and your report doesn’t build. Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can do a search and find references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-)

    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but the implementation is not as easy as it seems. Similarly, many people assume the lookup component simply looks up additional information and pay little attention to it, and for that very reason they end up with very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) explains how to configure the lookup component's cache mode well.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).
    There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation (see the sketch at the end of this post).  Again, neither of these present a reason to avoid full cache mode, but they should be used to determine whether full cache mode should be used in a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:
    • The size of the reference data set is small to moderately sized
    • The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    • Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache

    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:
    • The size of the data in the pipeline (more specifically, the number of distinct key columns) is relatively small
    • The size of the lookup data is too large to effectively store in cache
    • The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

    As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the below conditions are true:
    • The reference data set is too large to reasonably be loaded into SSIS memory
    • The pipeline data set is small and is not expected to grow
    • There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
    Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS
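    To make the case-sensitivity workaround mentioned above concrete, here is a minimal sketch of my own (DimCustomer, StagingOrders and CustomerEmail are hypothetical names, not from the post). The same function is applied to the lookup reference query and to the pipeline source query, so the in-memory .NET comparison sees identical casing on both sides.

      -- Reference (lookup) query: normalize the case of the join column.
      select CustomerKey,
             upper(CustomerEmail) as CustomerEmailKey
      from dbo.DimCustomer;

      -- Pipeline (source) query: normalize the same column the same way,
      -- then map CustomerEmailKey to CustomerEmailKey in the lookup component.
      select OrderID,
             OrderDate,
             upper(CustomerEmail) as CustomerEmailKey
      from dbo.StagingOrders;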

    Read the article

  • How to set up a Secure Semi-Public Revision Control System

    - by user24912
    I have a Windows server with a project configured in a revision control system. Suppose it's Git or SVN or.... Suppose there are 10 people around the globe working on this project. The first thing that comes to mind is to secure the connection between these programmers and the server with SSH, but my problem is that a hacker could destroy the server if he gets hold of the SSH username and password (tell me if I'm wrong). So I need a secure way to let those programmers push their revisions to the server. Any ideas would be lovely.

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    I've been struggling with this on an architectural level. I have an object which can be commented on; let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key: each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned; i.e. both a Post and a PostComment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComments etc...

    What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like to know which collection to attach the ID to? This strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls, which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && where IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && where IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm calling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
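    One common way to model this kind of tree, offered here as a sketch rather than the definitive answer, is a single self-referencing comments table (an adjacency list) instead of a separate table per nesting level. The example below assumes a Posts table with a PostID primary key; every comment stores both the post it belongs to and an optional parent comment.

      -- One table for all comments, whatever their depth.
      create table Comments (
          CommentID       int identity(1,1) primary key,
          PostID          int not null references Posts(PostID),    -- thread this comment belongs to
          ParentCommentID int null references Comments(CommentID),  -- null for top-level comments
          Body            nvarchar(max) not null,
          CreatedAt       datetime2 not null default sysutcdatetime()
      );

      -- Because every comment carries the PostID, a whole thread comes back in one call
      -- (with @PostID as the query parameter) and the tree is assembled in memory
      -- from ParentCommentID.
      select CommentID, ParentCommentID, Body, CreatedAt
      from Comments
      where PostID = @PostID
      order by CreatedAt;

    This avoids the one-query-per-layer pattern described above: the nesting depth no longer changes the number of database calls.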

    Read the article

  • Windows 7 System process reading/writing like crazy

    - by Mats Ekberg
    I have a problem that my windows 7 computer sometimes starts accessing the disk like crazy for maybe 10 minutes at a time. The process in question is the "system" process. I have disabled superfetch and hibernation on my computer, if that makes any difference. I disabled those to see if they were the cause of the problems, but no change. I have 6 GB of RAM and only the web browser was started when I took the screenshot, so I don't think it was thrashing due to page faults. Any ideas on how to find the cause of this?

    Read the article

  • Highlight Word add-in for Visual Studio 2010 [SSDT]

    - by jamiet
    I’ve just been alerted by my colleague Kyle Harvie to a Visual Studio 2010 add-in that should prove very useful if you are an SSDT user. It's simply called "Highlight all occurrences of selected word" and does exactly what it says on the tin: you highlight a word and it shows all other occurrences of that word in your script. There’s a limitation for .sql files (which I have reported) where the highlighting doesn’t work if the word is wrapped in square brackets, but what the heck, it's free and it takes about ten seconds to install….install it already! @Jamiet

    Read the article

  • Debian based OS: Show system notification on clipboard copy/paste events

    - by ifischer
    When working with Linux (Debian based), I often experience problems with copy and paste: I copy something with Ctrl+C or by selecting it with the mouse, but when I try to paste it somewhere else, the clipboard is empty and nothing happens - whatever the reason may be. This annoys me, as I have to switch back to the source application and copy the text again. So, to really be sure that something was copied into the clipboard, I would like the window manager (be it GNOME, KDE, etc.) to show a desktop notification (for example powered by libnotify) as soon as anything is copied into the clipboard, showing the copied content. Is that possible using one of the many clipboard managers out there (glippy, clipit, ...)? If not, can I write a (DBus?) hook which is triggered on copy-to-clipboard actions and shows the copied content in the desktop notification? (The latter approach would be more interesting, since I'm interested in hooking into system events in general.)

    Read the article

  • Telecommunication SID model and resources [on hold]

    - by andygluk
    There is a SID model well known in the telecom industry. Following this model you define resources as resources owned by your enterprise, then you build resource-oriented services on top of them, then customer-oriented services, and so on... So everything is based on enterprise-owned resources, which you have to identify first. What I am looking for, and what I am asking about, is some alternative to this model, built not on the resources the enterprise owns but on the resources the enterprise sells. Say you are selling licenses for using your products; instead of building the model on top of enterprise resources, you may be interested in building it on top of the licenses you are selling.

    Read the article

  • Is Pluto a Planet or Not? [Video]

    - by Asian Angel
    Have you ever wondered why Pluto lost its status as a planet? This video explains why it happened and shows the set of criteria that scientists use to determine a celestial body’s status as a planet or not. Is Pluto a planet? [via Geeks are Sexy]

    Read the article

  • Removing Duplicate Data From SQL Query Output For Display On A Web Page [migrated]

    - by doubleJ
    I had asked a similar question on stackoverflow but didn't really get anywhere. This page shows the output that I'm currently getting from my MSSQL server. I have a table of venue information (name, address, etc...) that our events happen on. Separately, I have a table of the actual events that are scheduled (an event may happen multiple times in one day and/or over multiple days). I join those tables with this query:

      <?php
      try {
          $dbh = new PDO("sqlsrv:Server=localhost;Database=Sermons", "", "");
          $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
          $sql = "SELECT TOP (100) PERCENT
                      dbo.TblSermon.Day, dbo.TblSermon.Date, dbo.TblSermon.Time,
                      dbo.TblSermon.Speaker, dbo.TblSermon.Series, dbo.TblSermon.Sarasota,
                      dbo.TblSermon.NonFlc, dbo.TblJoinSermonLocation.MeetingName,
                      dbo.TblLocation.Location, dbo.TblLocation.Pastors, dbo.TblLocation.Address,
                      dbo.TblLocation.City, dbo.TblLocation.State, dbo.TblLocation.Zip,
                      dbo.TblLocation.Country, dbo.TblLocation.Phone, dbo.TblLocation.Email,
                      dbo.TblLocation.WebAddress
                  FROM dbo.TblLocation
                  RIGHT OUTER JOIN dbo.TblJoinSermonLocation ON dbo.TblLocation.ID = dbo.TblJoinSermonLocation.Location
                  RIGHT OUTER JOIN dbo.TblSermon ON dbo.TblJoinSermonLocation.Sermon = dbo.TblSermon.ID
                  WHERE (dbo.TblSermon.Date >= { fn NOW() })
                  ORDER BY dbo.TblSermon.Date, dbo.TblSermon.Time";
          $stmt = $dbh->prepare($sql);
          $stmt->execute();
          $stmt->setFetchMode(PDO::FETCH_ASSOC);
          foreach ($stmt as $row) {
              echo "<pre>";
              print_r($row);
              echo "</pre>";
          }
          unset($row);
          $dbh = null;
      } catch (PDOException $e) {
          echo $e->getMessage();
      }
      ?>

    So, as it loops through the query results, it creates an array for each record and ends up like this:

      Array ( [Day] => Tuesday [Date] => 2012-10-30 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => The Ark Church [Pastors] => Alan & Joy Clayton [Address] => 450 Humble Tank Rd. [City] => Conroe [State] => TX [Zip] => 77305.0 [Phone] => (936) 756-1988 [Email] => [email protected] [WebAddress] => http://www.thearkchurch.org )

      Array ( [Day] => Wednesday [Date] => 2012-10-31 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => The Ark Church [Pastors] => Alan & Joy Clayton [Address] => 450 Humble Tank Rd. [City] => Conroe [State] => TX [Zip] => 77305.0 [Phone] => (936) 756-1988 [Email] => [email protected] [WebAddress] => http://www.thearkchurch.org )

      Array ( [Day] => Tuesday [Date] => 2012-11-06 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => Fellowship Of Faith Christian Center [Pastors] => Michael & Joan Kalstrup [Address] => 18999 Hwy. 59 [City] => Oakland [State] => IA [Zip] => 51560.0 [Phone] => (712) 482-3455 [Email] => [email protected] [WebAddress] => http://www.fellowshipoffaith.cc )

      Array ( [Day] => Wednesday [Date] => 2012-11-14 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => Faith Family Church [Pastors] => Michael & Barbara Cameneti [Address] => 8200 Freedom Ave NW [City] => Canton [State] => OH [Zip] => 44720.0 [Phone] => (330) 492-0925 [Email] => [WebAddress] => http://www.myfaithfamily.com )

    As you can see, The Ark Church and its associated contact information is duplicated, so when I work with those arrays and output them to the page, I see a bunch of duplicate content. I'd like to remove the duplicate information so that I get results similar to this:

      The Ark Church
      Alan & Joy Clayton
      450 Humble Tank Rd.
      Conroe, TX 77305
      (936) 756-1988
      [email protected]
      http://www.thearkchurch.org
      Meetings:
      Tuesday, 2012-10-30 07:00 PM
      Wednesday, 2012-10-31 07:00 PM

      Fellowship Of Faith Christian Center
      Michael & Joan Kalstrup
      18999 Hwy. 59
      Oakland, IA 51560
      (712) 482-3455
      [email protected]
      http://www.fellowshipoffaith.cc
      Meetings:
      Tuesday, 2012-11-06 07:00 PM

      Faith Family Church
      Michael & Barbara Cameneti
      8200 Freedom Ave NW
      Canton, OH 44720
      (330) 492-0925
      http://www.myfaithfamily.com
      Meetings:
      Wednesday, 2012-11-14 07:00 PM

    It doesn't necessarily have to end up like that (I'm not looking for code specific for these results, but a concept of how to not show the duplicated information). I'm assuming that an additional foreach or while will do it, but I haven't figured out any logic that says <?php if ($location == $previouslocation) echo ""; ?>.
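    One possible approach (just a sketch, not the only way to do it): sort by the venue first, so all meetings for a venue arrive as consecutive rows, and then have the display loop print the venue block only when Location differs from the previous row, exactly the previous-location check hinted at above. The query below reuses the tables, joins and filter from the question, with a trimmed column list and the reordered ORDER BY.

      SELECT dbo.TblLocation.Location, dbo.TblLocation.Pastors, dbo.TblLocation.Address,
             dbo.TblLocation.City, dbo.TblLocation.State, dbo.TblLocation.Zip,
             dbo.TblSermon.Day, dbo.TblSermon.Date, dbo.TblSermon.Time
      FROM dbo.TblLocation
      RIGHT OUTER JOIN dbo.TblJoinSermonLocation ON dbo.TblLocation.ID = dbo.TblJoinSermonLocation.Location
      RIGHT OUTER JOIN dbo.TblSermon ON dbo.TblJoinSermonLocation.Sermon = dbo.TblSermon.ID
      WHERE (dbo.TblSermon.Date >= { fn NOW() })
      -- Venue first, then date/time, so a "same Location as the previous row?" check works.
      ORDER BY dbo.TblLocation.Location, dbo.TblSermon.Date, dbo.TblSermon.Time;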

    Read the article

  • Embedded .swf file in .pdf - Ubuntu 10.04

    - by Thanos
    I have just finished a presentation in LaTeX. In this .pdf file I have included a .swf animation (done with Adobe Flash CS5 in Windows) which starts when you click on it. While I have already installed a relevant player (swfdec flash player), neither Document Viewer nor Okular is able to play it. I tried the file with my player to make sure it is not corrupted, and it plays fine. I tried the same .pdf file in Windows using Adobe Reader and there is no problem there: the embedded file plays without issue. So I thought of installing Adobe Reader in Ubuntu and tried it there to see if the problem was solved. Things got a bit better: Adobe Reader could tell that there is something there, so when I clicked I got a message that I had to get the proper player. When I clicked on the relevant button I expected my browser to open on a player's page; instead nothing happened. If I place my mouse cursor next to the space that holds my animation, there is a message stating "Media File (application/x-shockwave-flash)". The next step was to install Adobe Flash Player, but I could only find the browser plugins, not the standalone player... How can I get this .swf file to play in the pdf?

    Read the article

  • VMware Workstation 7.x error loading operating system help

    - by StealthRT
    Hey all, I am using the Windows version of VMware Workstation 7.x and I am getting this error when I start my VM:

      error loading operating system

    That only started to happen when I tried to load that vmdk file into the program VirtualBox. Prior to this it ran just fine; ever since then I have been unable to start it in VMware Workstation 7.x. I've already tried to repair the vmdk file, but when I do it tells me there are no errors found. I used:

      vmware-vdiskmanager.exe -R "c:\blah\my vm disk.vmdk"

    Anyone have any more suggestions I could try? It's a 300+GB VM so I really don't want to lose it!!! Thanks, David

    Read the article

  • Remote Desktop from Windows 7 into an XP SP3 system - print issue

    - by user50963
    I have a Windows 7 laptop that I use to remote in to work, where my machine runs XP SP3. I have a Brother MFC-8670DN printer. I have the Win7 print drivers installed and working on the Win7 laptop, I made local printers accessible over the RDP session, and I installed the XP drivers for the Brother printer. So my question is: is there a way that I can print from my Win7 machine while remotely connected to an XP SP3 system? Or is there no way that I can put the correct drivers on the XP machine to have it redirect to my laptop (Win7)?

    Read the article
