Pamukkale Thermal Tourism


UNFORTUNATELY, MY DENİZLİ, TOURISM IS SOMETHING ELSE!

Unfortunately, in my Denizli people from every profession talk as if they know tourism. I have been in tourism since 1972 and served on the tourism commission of the 10th Five-Year Development Plan. We keep quiet out of courtesy, because even the smallest piece of negative news hurts tourism. Yet everyone whose connection to tourism goes no further than buying a plane ticket and staying in a hotel talks about tourism, offers opinions, and promises to bring tours to Denizli. We have yet to see the tour buses they supposedly bring: they will show off our cultural assets in Denizli, the Kaleiçi and our mosques, and so on. Come on, my fellow townspeople, when did you last see a foreign tour bus at our new mosque? In the old days, buses from Bastıyalı Turizm and Free Time used to bring tourists, especially Scandinavians, to the second commercial road. How did they come, and who made it happen? Go and ask Hacı Şerif Necip Helvacı. My friends, when a bus carrying forty foreign tourists pulls in, there is not even a proper public toilet in Denizli, for heaven's sake.

No tourism professional in Denizli ever comments on architecture, engineering, medicine, municipal administration or any other profession. Yet in my great Denizli everyone talks and talks about tourism and about Pamukkale. In my view, no one in Denizli has earned the right to talk about either tourism or Pamukkale; no offence intended. What has Denizli done for tourism? A great big nothing! Is there a single five-star hotel in Pamukkale owned by a Denizli businessman? No! At the Pamukkale consultation and steering meetings, the Denizli participants would arrive in Ankara speaking with one voice: "tear down the hotels on the ancient site, close the road", and so on. When we said "do not demolish them, revise them instead", the answer was "you are from Denizli, Ali Aktürk, you are speaking emotionally" — my colleague in tourism Nurettin Koçak can bear witness to that. In the history of Denizli tourism, no official — not the museum directors, not the culture or tourism directors — has guided the tourism professionals, offered an idea or given any direction, not in the last thirty years. So it is not right to expect anything for a Pamukkale you have never cared about or taken ownership of, from a tourism sector and tourism professionals you have never understood or supported.

Bringing tourists to Pamukkale is not easy, my friends. Of all the excursions sold on the coast, the Pamukkale tour is the one that eats up the most time; the programme is packed. The tourist has a room and an all-inclusive package back at the resort and wants to return to it. Nobody should expect tourists to visit the mosque or a museum to be built in the city centre; they will not come, they cannot come, so stop talking big from the sidelines. Whether they arrive on an Anatolia circuit or from the coast, the time a tourist will spend at Pamukkale is three to four hours, and that is all. As for the Anatolia tours — may their numbers grow — who would not want them to spend at least one night in our hotels?

What can you actually offer the tourist in Pamukkale or Denizli? They can take a selfie on the travertines; fifteen minutes is enough, the weather is scorching in summer and cold in winter. Will they not ask why lying down and sunbathing on the travertines was banned? There was a time when tourists sunbathed on the travertines and stayed ten to fifteen days, back when the country received 7 million visitors and roughly 14 percent of them — 936,000 foreign tourists — came to Pamukkale. Because stays were long, businesspeople genuinely built hotels; their misfortune was the conservation development plan, both the flawed decisions at its core and the time lost in its implementation. Throughout its history Hierapolis supported a population of 60,000 to 120,000 as a city of textiles, olive oil and marble. Those Hierapolitans did not darken the travertines, yet the 5,000-10,000 tourists arriving each day supposedly did! All of my fellow Denizli citizens believed these misguided bans that finished off Pamukkale tourism and said nothing; some NGOs even backed them.
At no point in their history were the travertines ever closed to people — not until 14 May 1997. On the contrary, the presence of people made the travertines whiter: the ripples bathers created pushed the settling white mineral in the water out toward the far edges, so the whitening reached further. With people in them, the travertine pools did not deepen needlessly; they did not grow upward from their rims, so they held less thermal water, the water evaporated sooner, and the white calcium-carbonate mud dried quickly. Today the travertine pools have become needlessly deep and take a long time to dry. Dust carried on the wind settles into pools that never dry out during that long period, and the deposit loses its character; with the dust, small weeds sprout in the travertine pools.

If we want tourists to stay longer in the Denizli region, promoting thermal health tourism is not enough: the major conservation-plan mistakes made on the Pamukkale ancient site — however well intentioned they were at the time — must be corrected. The ancient channels must once again carry the thermal water to the travertines, as they used to. The visually dreadful concrete channels must go. Closed concrete channels prevent the transfer of water that is ready to whiten the travertines. The water needs to cool in open channels and, as it cools, to lose carbon dioxide; only then can that magnificent whiteness be obtained. The calcium-rich thermal water leaves the source at 36-37 degrees Celsius, loses carbon dioxide as it cools along the open ancient channels, and arrives ready to deposit the white settling mineral, calcium carbonate. The Hierapolitans built a great city and lined it from end to end with marble sewers; why, one wonders, would they not also have built special marble channels for the thermal water they worshipped?

The beautiful white travertines should be opened to people again, as they were before 14 May 1997. The travertines were our beach. Is sunbathing banned on the beaches where coastal tourists go? No. Yet in Denizli, under the guise of conservation, our travertines were closed off without a single trial being attempted. It was a very harsh decision for Denizli tourism, and one that delighted our competitors. And do not tell me that Hierapolis back then had no science or scientists, that it was backward: many teachers went from Hierapolis to Rome, mind you. Unfortunately, my Denizli, an arrow once loosed does not come back. But solutions are still possible for Denizli and Pamukkale tourism, provided we first agree on what the problems are. There is surely a way; it is enough for Denizli to stand united. Until next time. Ali Aktürk

 

CITY OF THE DEAD (THE NECROPOLIS)

Funeral and burial traditions: Hierapolis contains tombs of several types. The first are the tumulus tombs — burial mounds ringed by a crepis (a stone retaining wall) that held several burials. A phallus figure was set on top as a symbol of fertility and abundance. These were family tombs. Each family vault, in which the members of a single family were buried, was enclosed by stone walls, and for this the family paid a fixed sum of money every year. Because water was piped in, the surroundings of the tombs looked like a flower garden. Stone stairways were built to reach some of the tombs set on podiums; these were also used for libations (sacred offerings). A wealthy deceased was escorted to the necropolis by hired mourners, whose tears were collected in small glass vessels and placed in the grave as a sign of the dead person's prosperity. Afterwards a meal was held for those who had taken part in the funeral procession.

 

DENİZLİ, CRADLE OF CIVILIZATION

Denizli, the cradle of civilization, has been home to eight civilizations, which left behind fifty cities. Important trade centres and the largest medical school stood on these lands. The soil we live on has hosted civilizations since 5500 BC — for 7,500 years. There are exactly fifty ancient cities here, thirteen of them major. First the native peoples of Anatolia, then the Hittites, Lydians, Phrygians, the Hellenistic kingdoms, and the Roman, Byzantine, Seljuk and Ottoman empires all chose to settle these lands for their fertile soil and geographic position. WHICH CITY IS WHERE?
Planet MySQL - https://planet.mysql.com

  • MySQL data archiving: another use for HeatWave Lakehouse
The ability to store data on Object Storage and retrieve it dynamically when necessary is a notable advantage of Lakehouse when managing MySQL historical data we would like to archive. Let's illustrate this with the following table:

CREATE TABLE `temperature_history` ( `id` bigint unsigned NOT NULL AUTO_INCREMENT, `time_stamp` timestamp NULL DEFAULT CURRENT_TIMESTAMP, `device_id` varchar(30) DEFAULT NULL, `value` decimal(5,2) NOT NULL DEFAULT '0.00', `day_date` date GENERATED ALWAYS AS (cast(`time_stamp` as date)) STORED NOT NULL, PRIMARY KEY (`id`,`day_date`), KEY `device_id_idx` (`device_id`) ) ENGINE=InnoDB AUTO_INCREMENT=129428417 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci SECONDARY_ENGINE=RAPID /*!50500 PARTITION BY RANGE COLUMNS(day_date) (PARTITION p0_before2023_11 VALUES LESS THAN ('2023-11-01') ENGINE = InnoDB, PARTITION p2023_12 VALUES LESS THAN ('2023-12-01') ENGINE = InnoDB, PARTITION p2024_01 VALUES LESS THAN ('2024-01-01') ENGINE = InnoDB, PARTITION p2024_02 VALUES LESS THAN ('2024-02-01') ENGINE = InnoDB, PARTITION p2024_03 VALUES LESS THAN ('2024-03-01') ENGINE = InnoDB, PARTITION p2024_04 VALUES LESS THAN ('2024-04-01') ENGINE = InnoDB, PARTITION p9_future VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */

You can notice that the table is also loaded in the MySQL HeatWave Cluster (see this previous post). This table is full of records that were generated by IoT devices:

select count(*) from temperature_history;
+-----------+
| count(*)  |
+-----------+
| 129428416 |
+-----------+
1 row in set (0.0401 sec)

Mind the response time 😉 You can also notice that we have partitions.

The plan to save disk space and archive the data on cheap storage is the following:
1. once a partition is not required anymore, we dump its content to an Object Storage bucket
2. we drop the partition
3. if it's the first time, we create the archive table from the HeatWave load report
4. if needed, we load/unload the data on demand
5. we can create a new future partition (optional)

Dumping a Partition to Object Storage

Now the partition with data before December can be archived. Let's see how much data this represents:

select count(*) from temperature_history partition(p2023_12);
+----------+
| count(*) |
+----------+
|  1894194 |
+----------+
1 row in set (0.0373 sec)

Object Storage Bucket & PAR

Now we need to create a bucket where we will archive the data for the temperature_history table. We will use a Pre-Authenticated Request (PAR) to write and read data in Object Storage. It's important to allow reads and writes and the listing of the objects. This is the PAR's URL we will use.

Data Transfer

We use the MySQL Shell dumpTables() utility to copy the data from the partition to Object Storage using the PAR URL:

util.dumpTables("piday", ["temperature_history"], "https://<namespace>.objectstorage.<region>.oci.customer-oci.com/p/<random>/n/<namespave>/b/temperature_history_archive/o/", {"dialect": "csv", "compression": "none", "partitions": {"piday.temperature_history": ["p2023_12"]} })

It's very important to specify not to compress the files, as they are compressed by default.
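As a rough illustration of steps 1 and 2 of the plan above, here is a minimal Python sketch of the same dump-then-drop cycle. It shells out to MySQL Shell for the dump and uses mysql-connector-python for the DDL; the connection details, password handling, and the PAR URL are placeholders, not values from this post.

    import subprocess
    import mysql.connector  # assumes mysql-connector-python is installed

    PAR_URL = "https://<namespace>.objectstorage.<region>.oci.customer-oci.com/p/<random>/...bucket.../o/"
    PARTITION = "p2023_12"

    # Step 1: dump the partition to Object Storage with MySQL Shell's dumpTables utility.
    js = ('util.dumpTables("piday", ["temperature_history"], "%s", '
          '{"dialect": "csv", "compression": "none", '
          '"partitions": {"piday.temperature_history": ["%s"]}})' % (PAR_URL, PARTITION))
    # mysqlsh will prompt for the password unless credentials are stored.
    subprocess.run(["mysqlsh", "admin@127.0.0.1", "--js", "-e", js], check=True)

    # Step 2: drop the partition once its content is safely archived.
    cnx = mysql.connector.connect(host="127.0.0.1", user="admin",
                                  password="secret", database="piday")
    cur = cnx.cursor()
    cur.execute("ALTER TABLE temperature_history DROP PARTITION %s" % PARTITION)
    cnx.close()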
From the OCI Console, we can see all the generated files; we need to keep only the CSV ones. If you have a very large table with a lot of files (chunks), deleting all the .idx files is a long process; you can then use a tool like Fileon – S3 Browser.

Partition Management

Now that the data is stored in Object Storage, we can delete the partition:

alter table temperature_history drop partition p2023_12;

As we are working on the partitions, we can already add an extra one (the optional point 5 above) using the following syntax:

alter table temperature_history reorganize partition p9_future into ( partition p2024_05 values less than ('2024-05-01'), partition p9_future values less than (maxvalue) );

Archive Table Creation

The first time, we need to create the archive table into which we will load the Object Storage data used for Lakehouse.

Lakehouse

We need a HeatWave Cluster with Lakehouse enabled, and we need to prepare the system with the table and partition we want to load. For this operation, we need to set 3 variables:

db_list: list of the databases we will load
dl_tables: list of the tables we will load and the information related to the format and the Object Storage location
options: preparation of the arguments for the heatwave_load procedure; we also parse and include the dl_tables variable

db_list

We start by defining the db_list. In our case it's easy, as we only use one database: piday

SET @db_list = '["piday"]';

dl_tables

We need to provide information related to the table we want to create and specify where and how the table is stored:

SET @dl_tables='[{"db_name": "piday","tables": [ {"table_name": "temperature_history_archive", "dialect": {"format": "csv", "field_delimiter": "\,", "record_delimiter": "\\n" }, "file": [{"par": "https://...<the_par_url>..."}] } ] }]';

options

We can now generate the options variable that we will use as argument for our procedure:

SET @options = JSON_OBJECT('mode', 'dryrun', 'policy', 'disable_unsupported_columns', 'external_tables', CAST(@dl_tables AS JSON));

Auto Parallel Load

Lakehouse has the capability to create the table for us and load the data into it. But as we want to explicitly use some specific column names instead of the generic ones, we will use the report to create the table and load the data manually in two different steps. This is why we specified dryrun as the mode in the @options definition:

call sys.heatwave_load(@db_list, @options);

We can now retrieve the table's creation statement and manually modify the names of the columns while creating the table:

SELECT log->>"$.sql" AS "Load Script" FROM sys.heatwave_autopilot_report WHERE type = "sql" ORDER BY id\G

Let's modify all the col_X columns with the field names we want:

CREATE TABLE `piday`.`temperature_history_archive`( `id` int unsigned NOT NULL, `time_stamp` timestamp(0) NOT NULL, `device_id` varchar(28) NOT NULL COMMENT 'RAPID_COLUMN=ENCODING=VARLEN', `value` decimal(4,2) NOT NULL ) ENGINE=lakehouse SECONDARY_ENGINE=RAPID ENGINE_ATTRIBUTE='{"file": [{"par": "https://...<PAR>..."}], "dialect": {"format": "csv", "field_delimiter": ",", "record_delimiter": "\\n"}}';

Once created, we can load the data into the secondary engine:

ALTER TABLE /*+ AUTOPILOT_DISABLE_CHECK */ `piday`.`temperature_history_archive` SECONDARY_LOAD;

We can verify that the data was loaded correctly:

select count(*) from temperature_history_archive;
+----------+
| count(*) |
+----------+
|  1894194 |
+----------+
1 row in set (0.0299 sec)
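Returning to the partition-management step above: rather than hard-coding next month's boundary each time, the REORGANIZE PARTITION statement can be generated. A minimal sketch, with placeholder connection settings (it will fail if the computed partition already exists):

    import datetime as dt
    import mysql.connector  # assumes mysql-connector-python is installed

    cnx = mysql.connector.connect(host="127.0.0.1", user="admin",
                                  password="secret", database="piday")
    cur = cnx.cursor()

    # First day of the month after next, e.g. 2024-06-01 when run in April 2024.
    first = dt.date.today().replace(day=1)
    boundary = (first + dt.timedelta(days=62)).replace(day=1)
    name = f"p{boundary:%Y_%m}"

    # Split p9_future so a partition for the new month is always ready.
    cur.execute(
        f"ALTER TABLE temperature_history REORGANIZE PARTITION p9_future INTO ("
        f" PARTITION {name} VALUES LESS THAN ('{boundary:%Y-%m-%d}'),"
        f" PARTITION p9_future VALUES LESS THAN (MAXVALUE))")
    cnx.close()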
Now let’s move forward in time and let’s assume we can archive the data present in the partition p2024_01: select count(*) from temperature_history partition(p2024_01); +----------+ | count(*) | +----------+ | 50034435 | +----------+ 1 row in set (0.1562 sec) We need to dump the data in our Object Storage bucket, unfortunately we need to use a different folder at the dump needs an empty destination. We will use a temporary folder in our bucket: util.dumpTables("piday",["temperature_history"],"https://<PAR URL>/tmp/", {"dialect": "csv","compression":"none", "partitions": {"piday.temperature_history": ["p2024_01"]}}) Acquiring global read lock Global read lock acquired Initializing - done 1 tables and 0 views will be dumped. Gathering information - done All transactions have been started Locking instance for backup Global read lock has been released Writing global DDL files Running data dump using 4 threads. NOTE: Progress information uses estimated values and may not be accurate. Writing schema metadata - done Writing DDL - done Writing table metadata - done Starting data dump 40% (50.03M rows / ~124.50M rows), 1.63M rows/s, 104.01 MB/s Dump duration: 00:00:24s Total duration: 00:00:24s Schemas dumped: 1 Tables dumped: 1 Data size: 3.44 GB Rows written: 50034435 Bytes written: 3.44 GB Average throughput: 141.20 MB/s This produces a lot of files: As we only need the csv ones, I will use a fuse module to mount the Object Storage Bucket on my system and use the usual commands to move and delete files (see this post on how to setup s3fs-fuse). [fred@dell ~] $ mkdir mnt [fred@dell ~] $ s3fs temperature_history_archive ~/mnt/ -o endpoint=us-ashburn-1 \ -o passwd_file=~/.passwd-ocifs \ -o url=https://xxx.compat.objectstorage.us-ashburn-1.oraclecloud.com/ \ -onomultipart -o use_path_request_style [fred@dell ~] $ ls mnt piday@temperature_history@p2023_12@0.csv tmp piday@temperature_history@p2023_12@@1.csv [fred@dell ~/mnt] $ mv tmp/*.csv . [fred@dell ~/mnt] $ rm -rf tmp We can now unload and load the data back in Lakehouse: ALTER TABLE /*+ AUTOPILOT_DISABLE_CHECK */ `piday`.`temperature_history_archive` SECONDARY_UNLOAD; ALTER TABLE /*+ AUTOPILOT_DISABLE_CHECK */ `piday`.`temperature_history_archive` SECONDARY_LOAD; select count(*) from temperature_history_archive; +----------+ | count(*) | +----------+ | 51928629 | +----------+ 1 row in set (0.0244 sec) We can safely remove the partition from the production table: alter table temperature_history drop partition p2024_01; If we don’t need the archive data, we can simply unload it again (and load it back later): ALTER TABLE /*+ AUTOPILOT_DISABLE_CHECK */ `piday`.`temperature_history_archive` SECONDARY_UNLOAD; select count(*) from temperature_history_archive; ERROR: 3877: Secondary engine operation failed. Reason: "Table `piday`.`temperature_history_archive` is not loaded in HeatWave" Conclusion In this article, we explored the advantages of utilizing HeatWave Lakehouse to effectively store MySQL data for archiving purposes and reloading it as needed. It is noteworthy to mention that the entire archived dataset, consisting of 51 million records, was loaded from Object Storage within a relatively impressive time frame of 26.58 seconds on my MySQL HeatWave OCI instance. This can help saving disk space on your MySQL HeatWave instance and increase performance by cleaning up large tables. Bypassing the creation of the json and idx files, and the possibility to dump data on a non empty destination would be two very nice features for MySQL Shell dump utility. 
Enjoy archiving your data in MySQL, HeatWave and Lakehouse!

  • Upgrading GitHub.com to MySQL 8.0
Over 15 years ago, GitHub started as a Ruby on Rails application with a single MySQL database. Since then, GitHub has evolved its MySQL architecture to meet the scaling and resiliency needs of the platform—including building for high availability, implementing testing automation, and partitioning the data. Today, MySQL remains a core part of GitHub's infrastructure and our relational database of choice. This is the story of how we upgraded our fleet of 1200+ MySQL hosts to 8.0. Upgrading the fleet with no impact to our Service Level Objectives (SLO) was no small feat–planning, testing and the upgrade itself took over a year and collaboration across multiple teams within GitHub.

Motivation for upgrading

Why upgrade to MySQL 8.0? With MySQL 5.7 nearing end of life, we upgraded our fleet to the next major version, MySQL 8.0. We also wanted to be on a version of MySQL that gets the latest security patches, bug fixes, and performance enhancements. There are also new features in 8.0 that we want to test and benefit from, including Instant DDLs, invisible indexes, and compressed bin logs, among others.

GitHub's MySQL infrastructure

Before we dive into how we did the upgrade, let's take a 10,000-foot view of our MySQL infrastructure: Our fleet consists of 1200+ hosts. It's a combination of Azure Virtual Machines and bare metal hosts in our data center. We store 300+ TB of data and serve 5.5 million queries per second across 50+ database clusters. Each cluster is configured for high availability with a primary plus replicas cluster setup. Our data is partitioned. We leverage both horizontal and vertical sharding to scale our MySQL clusters. We have MySQL clusters that store data for specific product-domain areas. We also have horizontally sharded Vitess clusters for large-domain areas that outgrew the single-primary MySQL cluster. We have a large ecosystem of tools consisting of Percona Toolkit, gh-ost, orchestrator, freno, and in-house automation used to operate the fleet. All this sums up to a diverse and complex deployment that needs to be upgraded while maintaining our SLOs.

Preparing the journey

As the primary data store for GitHub, we hold ourselves to a high standard for availability. Due to the size of our fleet and the criticality of MySQL infrastructure, we had a few requirements for the upgrade process: We must be able to upgrade each MySQL database while adhering to our Service Level Objectives (SLOs) and Service Level Agreements (SLAs). We are unable to account for all failure modes in our testing and validation stages. So, in order to remain within SLO, we needed to be able to roll back to the prior version of MySQL 5.7 without a disruption of service. We have a very diverse workload across our MySQL fleet. To reduce risk, we needed to upgrade each database cluster atomically and schedule around other major changes. This meant the upgrade process would be a long one. Therefore, we knew from the start we needed to be able to sustain operating a mixed-version environment. Preparation for the upgrade started in July 2022 and we had several milestones to reach even before upgrading a single production database.

Prepare infrastructure for upgrade

We needed to determine appropriate default values for MySQL 8.0 and perform some baseline performance benchmarking. Since we needed to operate two versions of MySQL, our tooling and automation needed to be able to handle mixed versions and be aware of new, different, or deprecated syntax between 5.7 and 8.0.
Ensure application compatibility

We added MySQL 8.0 to Continuous Integration (CI) for all applications using MySQL. We ran MySQL 5.7 and 8.0 side-by-side in CI to ensure that there wouldn't be regressions during the prolonged upgrade process. We detected a variety of bugs and incompatibilities in CI, helping us remove any unsupported configurations or features and escape any new reserved keywords. To help application developers transition towards MySQL 8.0, we also enabled an option to select a MySQL 8.0 prebuilt container in GitHub Codespaces for debugging and provided MySQL 8.0 development clusters for additional pre-prod testing.

Communication and transparency

We used GitHub Projects to create a rolling calendar to communicate and track our upgrade schedule internally. We created issue templates that tracked the checklist for both application teams and the database team to coordinate an upgrade. [Figure: Project board for tracking the MySQL 8.0 upgrade schedule.]

Upgrade plan

To meet our availability standards, we had a gradual upgrade strategy that allowed for checkpoints and rollbacks throughout the process.

Step 1: Rolling replica upgrades

We started with upgrading a single replica and monitoring while it was still offline to ensure basic functionality was stable. Then, we enabled production traffic and continued to monitor for query latency, system metrics, and application metrics. We gradually brought 8.0 replicas online until we upgraded an entire data center and then iterated through other data centers. We left enough 5.7 replicas online in order to roll back, but we disabled production traffic to start serving all read traffic through 8.0 servers. [Figure: The replica upgrade strategy involved gradual rollouts in each data center (DC).]

Step 2: Update replication topology

Once all the read-only traffic was being served via 8.0 replicas, we adjusted the replication topology as follows: An 8.0 primary candidate was configured to replicate directly under the current 5.7 primary. Two replication chains were created downstream of that 8.0 replica: a set of only 5.7 replicas (not serving traffic, but ready in case of rollback), and a set of only 8.0 replicas (serving traffic). The topology was only in this state for a short period of time (hours at most) until we moved to the next step. [Figure: To facilitate the upgrade, the topology was updated to have two replication chains.]

Step 3: Promote MySQL 8.0 host to primary

We opted not to do direct upgrades on the primary database host. Instead, we would promote a MySQL 8.0 replica to primary through a graceful failover performed with Orchestrator. At that point, the replication topology consisted of an 8.0 primary with two replication chains attached to it: an offline set of 5.7 replicas in case of rollback and a serving set of 8.0 replicas. Orchestrator was also configured to blacklist 5.7 hosts as potential failover candidates to prevent an accidental rollback in case of an unplanned failover. [Figure: Primary failover and additional steps to finalize the MySQL 8.0 upgrade for a database.]

Step 4: Internal facing instance types upgraded

We also have ancillary servers for backups or non-production workloads. Those were subsequently upgraded for consistency.

Step 5: Cleanup

Once we confirmed that the cluster didn't need to roll back and was successfully upgraded to 8.0, we removed the 5.7 servers. Validation consisted of at least one complete 24 hour traffic cycle to ensure there were no issues during peak traffic.
Ability to Rollback

A core part of keeping our upgrade strategy safe was maintaining the ability to roll back to the prior version of MySQL 5.7. For read-replicas, we ensured enough 5.7 replicas remained online to serve production traffic load, and rollback was initiated by disabling the 8.0 replicas if they weren't performing well. For the primary, in order to roll back without data loss or service disruption, we needed to be able to maintain backwards data replication between 8.0 and 5.7.

MySQL supports replication from one release to the next higher release but does not explicitly support the reverse (MySQL Replication compatibility). When we tested promoting an 8.0 host to primary on our staging cluster, we saw replication break on all 5.7 replicas. There were a couple of problems we needed to overcome: In MySQL 8.0, utf8mb4 is the default character set and uses the more modern utf8mb4_0900_ai_ci collation as the default. The prior version, MySQL 5.7, supported the utf8mb4_unicode_520_ci collation but not the latest version of Unicode, utf8mb4_0900_ai_ci. MySQL 8.0 also introduces roles for managing privileges, but this feature did not exist in MySQL 5.7. When an 8.0 instance was promoted to be a primary in a cluster, we encountered problems: our configuration management was expanding certain permission sets to include role statements and executing them, which broke downstream replication in 5.7 replicas. We solved this problem by temporarily adjusting defined permissions for affected users during the upgrade window. To address the character collation incompatibility, we had to set the default character encoding to utf8 and collation to utf8_unicode_ci. For the GitHub.com monolith, our Rails configuration ensured that character collation was consistent and made it easier to standardize client configurations to the database. As a result, we had high confidence that we could maintain backward replication for our most critical applications.

Challenges

Throughout our testing, preparation and upgrades, we encountered some technical challenges.

What about Vitess?

We use Vitess for horizontally sharding relational data. For the most part, upgrading our Vitess clusters was not too different from upgrading the MySQL clusters. We were already running Vitess in CI, so we were able to validate query compatibility. In our upgrade strategy for sharded clusters, we upgraded one shard at a time. VTgate, the Vitess proxy layer, advertises the version of MySQL, and some client behavior depends on this version information. For example, one application used a Java client that disabled the query cache for 5.7 servers—since the query cache was removed in 8.0, it generated blocking errors for them. So, once a single MySQL host was upgraded for a given keyspace, we had to make sure we also updated the VTgate setting to advertise 8.0.

Replication delay

We use read-replicas to scale our read availability. GitHub.com requires low replication delay in order to serve up-to-date data. Earlier on in our testing, we encountered a replication bug in MySQL that was patched in 8.0.28: Replication: If a replica server with the system variable replica_preserve_commit_order = 1 set was used under intensive load for a long period, the instance could run out of commit order sequence tickets. Incorrect behavior after the maximum value was exceeded caused the applier to hang and the applier worker threads to wait indefinitely on the commit order queue. The commit order sequence ticket generator now wraps around correctly.
Thanks to Zhai Weixiang for the contribution. (Bug #32891221, Bug #103636)

We happen to meet all the criteria for hitting this bug. We use replica_preserve_commit_order because we use GTID based replication. We have intensive load for long periods of time on many of our clusters and certainly for all of our most critical ones. Most of our clusters are very write-heavy. Since this bug was already patched upstream, we just needed to ensure we are deploying a version of MySQL higher than 8.0.28. We also observed that the heavy writes that drove replication delay were exacerbated in MySQL 8.0. This made it even more important that we avoid heavy bursts in writes. At GitHub, we use freno to throttle write workloads based on replication lag.

Queries would pass CI but fail on production

We knew we would inevitably see problems for the first time in production environments—hence our gradual rollout strategy with upgrading replicas. We encountered queries that passed CI but would fail on production when encountering real-world workloads. Most notably, we encountered a problem where queries with large WHERE IN clauses would crash MySQL. We had large WHERE IN queries containing tens of thousands of values. In those cases, we needed to rewrite the queries prior to continuing the upgrade process. Query sampling helped to track and detect these problems. At GitHub, we use Solarwinds DPM (VividCortex), a SaaS database performance monitor, for query observability.

Learnings and takeaways

Between testing, performance tuning, and resolving identified issues, the overall upgrade process took over a year and involved engineers from multiple teams at GitHub. We upgraded our entire fleet to MySQL 8.0 – including staging clusters, production clusters in support of GitHub.com, and instances in support of internal tools. This upgrade highlighted the importance of our observability platform, testing plan, and rollback capabilities. The testing and gradual rollout strategy allowed us to identify problems early and reduce the likelihood of encountering new failure modes for the primary upgrade.

While there was a gradual rollout strategy, we still needed the ability to roll back at every step, and we needed the observability to identify signals indicating when a rollback was needed. The most challenging aspect of enabling rollbacks was holding onto the backward replication from the new 8.0 primary to 5.7 replicas. We learned that consistency in the Trilogy client library gave us more predictability in connection behavior and allowed us to have confidence that connections from the main Rails monolith would not break backward replication. However, for some of our MySQL clusters with connections from multiple different clients in different frameworks/languages, we saw backwards replication break in a matter of hours, which shortened the window of opportunity for rollback. Luckily, those cases were few and we didn't have an instance where the replication broke before we needed to roll back. But for us this was a lesson that there are benefits to having known and well-understood client-side connection configurations. It emphasized the value of developing guidelines and frameworks to ensure consistency in such configurations.

Prior efforts to partition our data paid off—it allowed us to have more targeted upgrades for the different data domains.
This was important as one failing query would block the upgrade for an entire cluster, and having different workloads partitioned allowed us to upgrade piecemeal and reduce the blast radius of unknown risks encountered during the process. The tradeoff here is that this also means that our MySQL fleet has grown. The last time GitHub upgraded MySQL versions, we had five database clusters and now we have 50+ clusters. In order to successfully upgrade, we had to invest in observability, tooling, and processes for managing the fleet.

Conclusion

A MySQL upgrade is just one type of routine maintenance that we have to perform – it's critical for us to have an upgrade path for any software we run on our fleet. As part of the upgrade project, we developed new processes and operational capabilities to successfully complete the MySQL version upgrade. Yet, we still had too many steps in the upgrade process that required manual intervention, and we want to reduce the effort and time it takes to complete future MySQL upgrades. We anticipate that our fleet will continue to grow as GitHub.com grows, and we have goals to partition our data further, which will increase our number of MySQL clusters over time. Building in automation for operational tasks and self-healing capabilities can help us scale MySQL operations in the future. We believe that investing in reliable fleet management and automation will allow us to scale GitHub and keep up with required maintenance, providing a more predictable and resilient system. The lessons from this project provided the foundations for our MySQL automation and will pave the way for future upgrades to be done more efficiently, but still with the same level of care and safety. If you are interested in these types of engineering problems and more, check out our Careers page.

The post Upgrading GitHub.com to MySQL 8.0 appeared first on The GitHub Blog.
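As a small aside on the collation and mixed-version concerns described above, checking that every member of a cluster reports the expected version, character set, and collation is easy to script. A minimal sketch (host names and credentials are placeholders, not GitHub's tooling):

    import mysql.connector  # assumes mysql-connector-python is installed

    # Hypothetical host list; in practice this would come from service discovery.
    HOSTS = ["primary.db.example", "replica-1.db.example", "replica-2.db.example"]

    for host in HOSTS:
        cnx = mysql.connector.connect(host=host, user="monitor", password="secret")
        cur = cnx.cursor()
        cur.execute("SELECT @@hostname, @@version, "
                    "@@character_set_server, @@collation_server")
        print(cur.fetchone())
        cnx.close()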

  • Backup Mastery: Running Percona XtraBackup in Docker Containers
Ensuring the security and resilience of your data hinges on having a robust backup strategy, and Percona XtraBackup (PXB), our open source backup solution for all versions of MySQL, is designed to make backups a seamless procedure without disrupting the performance of your server in a production environment. When combined with the versatility of Docker containers, it becomes a dynamic duo, offering a scalable approach to data backup and recovery. Let's take a look at how they work together.

Working with Percona Server for MySQL 8.1 and PXB 8.1 Docker images

Start a Percona Server for MySQL 8.1 instance in a Docker container

Percona Server for MySQL has an official Docker image hosted on Docker Hub. For additional details on how to run an instance in a Docker environment, refer to this section in the Percona Documentation:

sudo docker run --name percona-server-8.1 -v mysql_data:/var/lib/mysql -v /var/run/mysqld:/var/run/mysqld -p 3306:3306 -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=mysql -d percona/percona-server:8.1.0

sudo docker run – This is the command to run a Docker container.
--name percona-server-8.1 – Assigns the name "percona-server-8.1" to the Docker container.
-v mysql_data:/var/lib/mysql – Creates a Docker volume named "mysql_data" and mounts it to the "/var/lib/mysql" directory inside the container. This is typically used to store MySQL data persistently.
-v /var/run/mysqld:/var/run/mysqld – Mounts the host's "/var/run/mysqld" directory to the container's "/var/run/mysqld" directory. This can be useful for sharing the MySQL socket file for communication between processes.
-p 3306:3306 – Maps port 3306 on the host to port 3306 on the container. This is the default MySQL port, and it allows you to access the MySQL server running inside the container from the host machine.
-e MYSQL_ROOT_HOST=% – Sets the environment variable MYSQL_ROOT_HOST to '%' (which means any host). This is often used to allow root connections from any host.
-e MYSQL_ROOT_PASSWORD=mysql – Sets the environment variable MYSQL_ROOT_PASSWORD to 'mysql'. This is the password for the MySQL root user.
-d – Run the container in the background (detached mode).
percona/percona-server:8.1.0 – Specifies the Docker image to use for creating the container. In this case, it is Percona Server for MySQL version 8.1.0.

Note: To work with Percona Server for MySQL 8.0 Docker images, replace the Docker image name with percona/percona-server:8.0 and the Docker container name with percona-server-8.0. To work with Percona Server for MySQL 5.7 Docker images, replace the Docker image name with percona/percona-server:5.7 and the Docker container name with percona-server-5.7. Percona XtraBackup 8.1 can only take backups of Percona Server for MySQL 8.1. Similarly, Percona XtraBackup 8.0 and Percona XtraBackup 2.4 can only take backups of Percona Server for MySQL 8.0 and 5.7, respectively.

Add data to the database

Let's add some data to the Percona Server database. Create a test database and add a table t1 inside with five rows.

sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "CREATE DATABASE IF NOT EXISTS test;" >/dev/null 2>&1
sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "CREATE TABLE test.t1(i INT);" >/dev/null 2>&1
sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "INSERT INTO test.t1 VALUES (1), (2), (3), (4), (5);" >/dev/null 2>&1

Note: In the case of Percona Server for MySQL 8.0, replace the container name with percona-server-8.0.
In the case of Percona Server for MySQL 5.7, replace the container name with percona-server-5.7.

Run Percona XtraBackup 8.1 in a container, take a backup, and prepare

The Docker command below runs Percona XtraBackup 8.1 within a container using the data volume of the Percona Server container (percona-server-8.1). It performs a MySQL backup and stores the data on the volume (pxb_backup_data). The container is removed (--rm) after execution, providing a clean and efficient solution for the MySQL backup operation. In the case of Percona XtraBackup 8.0 or 2.4, replace the Docker image name in the command below with percona/percona-xtrabackup:8.0 or percona/percona-xtrabackup:2.4, respectively.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/backup_81 --user=root --password=mysql ; xtrabackup --prepare --target-dir=/backup_81"

Stop the Percona Server container

Before attempting to restore the backup, make sure the Percona Server container is stopped.

sudo docker stop percona-server-8.1

Note: In the case of Percona Server for MySQL 8.0, run sudo docker stop percona-server-8.0. In the case of Percona Server for MySQL 5.7, run sudo docker stop percona-server-5.7.

Remove the MySQL data directory

This step ensures that the MySQL data directory is empty before you attempt the --copy-back operation. Remember to replace the Docker image and container names in case Percona Server for MySQL 8.0 or 5.7 is used.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "rm -rf /var/lib/mysql/*"

Run Percona XtraBackup 8.1 in a container to restore the backup

The Docker command uses the data volume from the Percona Server for MySQL 8.1 container (percona-server-8.1) and runs Percona XtraBackup 8.1 within a separate container. The command executes the xtrabackup --copy-back operation, restoring MySQL data from the specified directory (/backup_81) to the MySQL data directory (/var/lib/mysql).

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "xtrabackup --copy-back --datadir=/var/lib/mysql/ --target-dir=/backup_81"

Note: When Percona XtraBackup 8.0 is used, replace the Docker image name with percona/percona-xtrabackup:8.0 and the Percona Server container name with percona-server-8.0. When Percona XtraBackup 2.4 is used, replace the Docker image name with percona/percona-xtrabackup:2.4 and the Percona Server container name with percona-server-5.7, respectively.

Start the Percona Server container to verify the restored data

When we stop and remove the original Percona Server container, the ownership and permissions of the files in the mounted volumes may change. A more secure and targeted approach is to identify the correct user and group IDs needed for the MySQL process and set the ownership accordingly.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "chown -R mysql:mysql /var/lib/mysql/"

This sets the correct ownership for the MySQL data directory.
Now, start the Percona Server instance inside the container.

sudo docker start percona-server-8.1

Once the server is started, fetch the total number of records in the test.t1 table to verify the correctness of the restored data.

sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -Bse 'SELECT * FROM test.t1;' | grep -v password
1
2
3
4
5

Summary

To sum up, Percona XtraBackup is an essential tool for data protection because it provides a dependable and effective backup for MySQL databases. Its easy integration with Docker containers increases its usefulness even more by offering a scalable and adaptable method for recovering and backing up data. We encourage users to continue using Percona XtraBackup and hope that this blog is useful. Happy MySQLing!

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

Download Percona XtraBackup
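For repeated use, the backup-and-prepare step above lends itself to a thin wrapper. The sketch below simply re-runs the exact docker command from this post and checks for xtrabackup's usual "completed OK!" marker; the container, volume, and password values are the ones used in the examples above, and it assumes docker can be invoked without a sudo password prompt.

    import subprocess

    # Same xtrabackup options as in the walkthrough above.
    BACKUP_AND_PREPARE = (
        "xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/backup_81 "
        "--user=root --password=mysql ; "
        "xtrabackup --prepare --target-dir=/backup_81")

    result = subprocess.run(
        ["sudo", "docker", "run", "--volumes-from", "percona-server-8.1",
         "-v", "pxb_backup_data:/backup_81", "--rm", "--user", "root",
         "percona/percona-xtrabackup:8.1", "/bin/bash", "-c", BACKUP_AND_PREPARE],
        capture_output=True, text=True)

    # xtrabackup logs "completed OK!" at the end of a successful run.
    if result.returncode != 0 or "completed OK!" not in (result.stdout + result.stderr):
        raise SystemExit("backup/prepare did not complete cleanly:\n" + result.stderr)
    print("backup prepared in volume pxb_backup_data")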

  • Installing MySQL Innovation Release on a Raspberry Pi
    Microprocessors have become quite common, and they are used for a variety of reasons. Check out this blog post on how to install MySQL on a Raspberry Pi to use it as a database server.

  • MySQL Interview Questions: Wrong Answers Only
During an interview or while having general discussions, I have found some funny responses that can easily be classified as "wrong answers," but at times they're thought-provoking or carry a deeper meaning. This blog is about some of the usual MySQL database conversations and responses, which can appear "wrong" or "funny," but there's actually more to them. I will share a selection of such seemingly "wrong" or whimsical responses and take a closer look at the valuable lessons and perspectives they offer. Let the "MySQL Interview" begin.

Q: How will you improve a slow query?
A: Let's not execute it at all. A query avoided is a query improved.

While this is a fact, we should carefully consider whether a query is necessary before executing it. Avoiding unnecessary queries and fetching only the required data can significantly optimize the query's performance. An approach to improve a query which cannot be avoided would be to:
- monitor the slow query log and use pt-query-digest to generate a summary report for slow queries;
- use an EXPLAIN statement in MySQL to understand the query execution plan, offering insights into table access order, index usage, and potential performance bottlenecks.
Additional read: Mike's blog on How to Find and Tune a Slow SQL Query.

Q: What is your disaster recovery (DR) strategy?
A: We have a replica under our primary database.

Hmm, a replica seems like a straightforward response, but it is not a comprehensive disaster recovery strategy. In reality, relying solely on a replica under the primary server is not sufficient for a robust disaster recovery plan. In a disaster recovery (DR) strategy, it is essential to consider multiple aspects, to name a few:
- data backup
- high availability
- failover mechanisms
- offsite storage
While having a replica is beneficial for load balancing and read scaling, it does not cover all disaster scenarios. Additional read: Baron's quick note on why you cannot rely on a replica for DR.

Q: What about a delayed replica?
A: Well, it is our delayed disaster recovery.

"What about a delayed replica?" you may ask. Well, it is a delayed disaster-in-waiting. 🙂 A lot depends on how strong your monitoring strategy is and how fast you can react to the DR call. The delayed replica surely complements regular real-time replicas by providing an additional layer of DR protection compared to the active primary. When disaster strikes and, importantly, is detected within the configured replica delay, it provides a relatively easy recovery option. That said, if the delayed replica is hosted in the same infrastructure/data center, it is vulnerable to the same disaster affecting the primary. It should surely help provide a good backup plan to guard against human error, logical error, data corruption, etc. Additional read: Walter's ultimate guide of MySQL Backup and Recovery Best Practices.

Q: What is one of your favourite (and common) security worst practices?
A: Usage of the .my.cnf file.

The .my.cnf file is typically used to store login credentials for MySQL, allowing users to connect to the database without providing credentials explicitly. We all know that saving plaintext passwords in this file is a significant security risk, as it could lead to unauthorized access if the file system is compromised. The same risk is present while using the password on the command prompt. Additional read: Use MySQL Without a Password (and Still be Secure); How to Secure MySQL – Percona Community MySQL Live Stream.

Q: What will you do to alter a table sized 10T?
A: Nothing.
I will not.

Well, the natural response would be to suggest looking for ONLINE ALTER options using tools like pt-online-schema-change or gh-ost. While those answers seem correct, would you really be able to alter a 10T table? Think about the time and resources required for such an activity. Clearly, 10T is just a number to represent a gigantic table size and give some perspective. The counter-question would be: "Why do you have such a large table in the database?" Since the size is "terrantic" (terabyte-sized), further growth is highly likely; there should either be an archiving strategy or some change in application logic to keep the table size manageable.

Large tables in production will cost you query performance, cause inefficient reading and writing, slow backups and restores, and introduce challenges in application changes and database upgrades. It is important to understand and monitor table growth in your system and work on possible table archiving strategies. The Percona Monitoring and Management dashboard does list the large tables by size and by rows, and even tables that are getting close to table-full situations.

Finally, one trivia question; I request that you respond in the comments. MySQL has a single database object, which is actually double. You can't see either of them, yet you can query! What is that? Additional read: Learn about Percona's online schema change tool.

Conclusion

Before concluding, I invite you to share your own playful takes on MySQL-related questions. As we wrap up, let's emphasize the importance of going beyond the obvious when tackling questions. Sometimes, the right answer requires a deeper dive, and that's where the true understanding lies. Until next time, happy MySQL-ing!
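As a closing aside, tying back to the first question about slow queries: the "EXPLAIN it before you tune it" habit is easy to script. A minimal sketch (connection settings and the query are made-up placeholders):

    import mysql.connector  # assumes mysql-connector-python is installed

    cnx = mysql.connector.connect(host="127.0.0.1", user="app",
                                  password="secret", database="test")
    cur = cnx.cursor()

    # EXPLAIN shows the access type, chosen key, and row estimate
    # without actually executing the (hypothetical) slow query.
    cur.execute("EXPLAIN SELECT * FROM t1 WHERE i = 3")
    for row in cur.fetchall():
        print(row)
    cnx.close()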
